Wherein the author compares Yggdrasil, tinc, Tailscale, Zerotier, Netmaker, Nebula, and ends up preferring Yggdrasil. Actually, it was this comparison that made me look into Netmaker and prefer it, and I've been running it for experimentation without issue. I hope the author revisits NM.
I agree that the project is moving quickly, which results in having to stay on top of changes to your configuration with releases, but they're still on 0.xx releases so it's to be expected.
Since Netmaker is just vanilla wireguard with some route coordination and STUN/TURN, it's reasonable for me to wrap my head around the system. Docs are written for self-hosters. These are all good signs.
Hi! Netmaker here. At the time of this writing this is true, but we're making some licensing changes this sprint, which I think will make people very happy. We started with SSPL just because it's much easier to go from more restrictive to less restrictive, as opposed to the alternative.
However, several months ago we moved all of the client-side code to Apache-2.0, and are about to make the server-side code FOSS-compatible.
IANAL. I'm afraid you have it the other way around. It is easier to switch from permissive licenses (MIT / 3BSD) to a compatible restrictive license (MPL / xGPL) than vice versa (xGPL to MPL / MIT) without a CLA.
We do have a CLA, and put it in place for this very reason. It is more complex legally, but I meant more in terms of the community. Better to make people happy by going less restrictive over time than to start out with Apache and switch to something more restrictive and upset a bunch of people.
I'm surprised they went for the SSPL, though. Its terms are too vague about the distinction between hosting and just using, and at the time I also read an entry on HN about how it might actually be unenforceable anyway.
Same here. Nebula might not be wireguard based, but it is open source and very stable.
Though, I have to say that if you are using Nebula within a LAN it sometimes takes minutes for two nodes to realize they are within each other's reach. In the meantime, they talk through the lighthouse, which can severely affect performance.
Liberal licenses are alive and well for libraries but for finished and polished products expect more of this. People are tired of being free labor for hustlers and billion dollar companies.
“Thanks for all the work, we’ll take that and put it behind a paywall and monetize it for you and not give you anything.”
People other than the author have a huge advantage when it comes to monetizing: they can focus only on that. The authors have to focus on creating while others can focus only on marketing.
I don't have much sympathy for that argument. AGPL (or dual-licensed AGPL + commercial) is the simple countermeasure for freeloaders. Most tech companies won't touch it with a 20-foot pole.
Didn't seem to be such a silver bullet for MongoDB, which used the AGPL and was still forced to abandon it in favor of an even stronger license of their own making (the SSPL, which is the AGPL on steroids).
I'm not sure if there are more such cases, but clearly with this one, AGPL was not enough.
I guess the issue is: if you have freeloaders, who are making all the money you would need to keep the OSS project alive, AGPL doesn't help you in any way. They are free to get the code as-is, run it, and serve their customers. Not sure why everybody suggests AGPL+Commercial as the perfect solution, it doesn't seem to solve anything for that case.
> Most tech companies won't touch it with a 20-foot pole.
This! In my experience, lawyers will deny the use of AGPL software even when there is no risk (from my point of view). Many companies already have a list of blessed licenses, and AGPL is not on any of the lists I'm aware of.
This phenomenon is mostly just GPL-phobia. Microsoft spent billions in the 90s to convince corporate legal that the GPL is "viral" in ways it is not. You could simply rename the license to something other than the letters G-P-L and this would go away.
SSPL is a bit more restrictive, but politics also have a lot to do with it. "Open Source" is just a term, technically anyone could call their license open source, but most people only consider a license open source if the OSI (a foundation) specifically approves it. Mongo tried and failed to get SSPL approved by OSI: https://blog.tidelift.com/what-i-learned-from-the-server-sid...
Interesting. According to the article, it seems like the biggest complaint was that Mongo was a for-profit company and couldn’t be trusted? I agree that for-profit companies can’t be trusted, but I’m not sure I agree with the statement “that’s not open source because the license was written by a for-profit company”.
That article is obfuscating why the license was going to be rejected by the OSI. The OSI has as part of their definition of open-source that there can be no field of use discriminators. It had nothing to do with the fact that it was drafted by a commercial company. OSI has approved plenty of licenses drafted by for-profit companies (e.g. Intel, IBM, Microsoft.)
How come AGPL doesn't run afoul of #10 of the Open Source Definition, which is:
> 10. License Must Be Technology-Neutral
> No provision of the license may be predicated on any individual technology or style of interface.
Under AGPLv3, if I have AGPLv3 code on a computer and users can interact with it, the requirements depend on the technology the users use to interact with the program.
It has a provision that only applies to users who are "interacting with it remotely through a computer network".
So...if my users are at the same location as the server and interacting through a command line interface on serial terminals those provisions do not apply.
I want to add a few more terminals in a nearby room but don't want to actually run serial lines from all of them to the server. Instead I run ethernet to the room the new terminals will be in, and at each terminal place an RPi with the terminal connected to the RPi and the RPi connected to ethernet. The RPi runs software that connects to the server via ssh and then exposes that ssh session on the terminal so they can use the command line interface to that AGPL program.
Now the users are interacting with the server via a computer network and so those provisions of AGPL might now apply, depending on whether or not this counts as interacting "remotely".
Same thing, but now I provide an app that users can run on their phones that makes a hard-coded connection to my server and runs a terminal emulator over that to a terminal session on the server, where the users can use the AGPL program. There doesn't seem to be any question that this is now definitely remote, and it is over a computer network, so those AGPL provisions now definitely apply.
And so we have essentially one thing, users using a terminal interface to interact with an AGPL command line program on my server, where what license provisions apply between me and any given user depends on just what technology is used to carry their typed text between their keyboard and my server, and to carry the program output text between my server and their display.
That provision is intended to exclude licenses that say things like "you must show a modal GUI window crediting this software" which would exclude the software from being used in a server application, library or command-line interface. Or a clause like "you must provide a copy of this software on a zip-disk if asked", which over time would become increasingly hard to do.
The AGPLv3 is not predicated on a particular technology or interface so it doesn't run afoul of this. You can use it in networked software, or un-networked software. If a license said something like "you cannot use this for software that users interact with over a network" then it would violate this principle.
Using relays is common when you connect to the VPN from enterprise networks, which often allow only outgoing 80/443. Also, you may not use the relays all the time, but you may use them at some point throughout the day. If there is a vulnerability, one connection may be enough to compromise security.
Their comparison graph at the bottom seems to indicate that the differentiating features between their product and Tailscale are that you can't self-host Tailscale (ignoring the existence of headscale) and that its WireGuard support is limited. I believe the latter point refers to the default Tailscale configuration that connects every node with every other node, whereas NetMaker allows different network configurations.
However, Tailscale ACLs should allow you to reconfigure the network into the shape you want, so I'm not sure that criticism still applies. Their claim that "data will pass through their relay (DERP) servers fairly regularly" also seems suspect, as that's only the case for networks where UDP traffic doesn't flow between clients despite STUN/TURN, which is very rare in practice.
The only advantage I can find is that NetMaker has a richer free plan and that they use the WireGuard kernel module where possible. I'm not sure why they didn't lead with that.
- speed: netmaker is faster because it uses kernel wg - this is not going to hold true for all system configurations, certainly doesn't for macos
- flexibility: feels like it does the same as tailscale, marketed slightly differently as they list common use-cases – egress and ingress gateways; network shaping with acls is also possible in ts; maybe someone will write up an unbiased comparison on self hosting
- price: ts offer seems to be good enough for most users and their limits are "soft" anyway – i pay $45 a year because i want sustainable, not free
Surprisingly, we improved the performance of wireguard-go (running in userspace) enough to make it faster than WireGuard (running in the kernel) in the best conditions. But, this point of comparison likely won’t be long-lived: we expect the kernel can do similar things.
Huh, would be interesting to see some non-tailscale benchmarks of this. Assuming the kernel impl is actually optimized it should be theoretically impossible to exceed the performance with userland wg?
I ran the same benchmarks they listed here[0], and did some practical tests. As of a week after the article was written, Tailscale was faster than kernel wireguard.
i don't route any traffic besides the odd exit-node, and tailscale takes care of that for me
the exit-node routing is for firewall circumvention - if i find myself unable to connect to git over ssh, i simply activate routing through the exit-node and continue working
Is the kernel module how they claim the 5x performance over tailscale? I haven't really done any real tailscale performance measurements, but I can't see how else they can claim this (unless there are infrastructure performance differences).
Afaik, tailscale recently made changes to their go user space implementation that actually made their version faster than the kernel implementation, at least in some cases.
I remember reading a blog post on tailscale's website about it and how they are pushing their changes upstream (wg kernel and official wg go user space implementation).
My hazy understanding from previous conversations was that the claimed performance advantages are basically how things were "out of the box" for comparison, in certain situations. However, I've seen people claim the differences quickly close when someone who knows what they are doing optimizes a setup, and in other situations the results are much more similar to begin with.
While it's a good idea to be skeptical of a company tooting its own horn, there does (or did) seem to be a consistent performance advantage that is beneficial to those who aren't paid to be, and don't enjoy being, a network admin beyond setting a proper MTU: https://techoverflow.net/2022/08/19/iperf-benchmark-of-zerot...
Hi, worth noting another point on this. Netmaker has "Client Gateways", which allow you to generate and modify raw WireGuard config files. This is extremely useful for integrating custom WireGuard setups. For instance, generate a config file, modify it, put it on a router, and boom, site-to-site. https://www.netmaker.io/features/ingress
I think it's worth doing your own investigation on how often traffic is getting relayed via Tailscale. We don't have numbers on it, but have had users who experienced very high latency with Tailscale, and after doing some traffic analysis, discovered it was getting relayed halfway across the country. Tailscale does a fantastic job at NAT traversal, but it's still a worthwhile consideration.
I've been running a tinc mesh network for eons w/ my systems and it's never given me any trouble. I use git to check in the 'hosts/' folder and add/remove hosts as needed, pull down to all the nodes, and they can all connect.
I do wish the encryption + transport could be as performant as wireguard, but for my needs, I haven't been pushing it hard enough that it's a concern for me.
Tinc works, but is not really stable for my use case: a strange network environment thanks to my school. It frequently falls into infinite loops, dropping all packets and fully using a CPU core (on Windows; Linux looks fine). It seems stable on an all-Linux network, but the moment a Windows client is added, things can go wrong.
It also does not really have a decent mobile client.
That's very strange. I use tinc-vpn exclusively for my network.
Linux, Win32, and FreeBSD. Everything is very stable.
But I have to admit, I have a private fork of tinc-vpn specifically for Win32.
AFAIR I only made minor changes to TAP driver initialization and to how scripts are executed. They have their own thread now, and additionally there is a script called tinc-pre to handle IP initialization before the TAP interface is up.
Because, meh, Windows network interfaces work strangely :)
I went with Dormant, as 1.1 has been in development for years with no official stable release. Changes to 1.1 are the odd PR here and there, nothing really from the main author anymore.
Yeah, things like improving the encryption are what I would expect even from a stable but active project like this, especially given ChaCha20's widespread adoption and optimisation these days. Likewise, I would have liked to have seen decent tinc phone clients (iOS, Android). But this is a failing of a lot of VPN clients.
Just look at the release interval on their news page.
Honestly I think the likes of WireGuard, Tailscale, etc took the steam out of the developer. It’s a shame because I do prefer to have a diverse range of VPNs rather than just OpenVPN, IPsec, WireGuard.
It's not so much the other VPN products out there, rather no time and no other core developers. There have been lots of people contributing, some much more than others, but usually it was just to scratch their itch, after which they move on (which is perfectly fine).
I'm not sure how to revitalize development if there is not a large interest from developers, and I don't want to turn this into something commercial like OpenVPN did.
Same here. I'm very happy with tinc-vpn. It can easily push 100 Mbit of traffic through my servers, and that's enough for my needs. Also, auto-mesh is a very nice feature. I do NOT want to forward traffic via my central hubs.
Note: I use tinc-vpn only in switch (L2) mode. For routing, good old quagga (forked) does its job.
Maybe a dumb question, but what are the advantages of such products compared to just configuring a "plain" wireguard server, e.g. on OpenBSD? I'm not a network expert and still it was pretty simple. Do these products offer more features? What kind of features?
You will soon find you have a combinatorial problem. If you have 10 nodes and want to add an 11th node, you will have to update the 10 already existing nodes.
These projects are automating the configuration of the nodes. You simply configure a new node and the other nodes are informed of the presence of the new guy.
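To make the combinatorial problem concrete, here is a rough sketch of what one node's plain wg-quick config tends to look like (all keys, addresses, and hostnames below are placeholders, not from any real setup):

    # /etc/wireguard/wg0.conf on node-1 (illustrative placeholders only)
    [Interface]
    Address    = 10.20.0.1/24
    PrivateKey = <node-1-private-key>
    ListenPort = 51820

    # one [Peer] block per other node in the mesh; adding an 11th node
    # means appending another block like these to all 10 existing configs
    [Peer]
    PublicKey  = <node-2-public-key>
    Endpoint   = node-2.example.net:51820
    AllowedIPs = 10.20.0.2/32

    [Peer]
    PublicKey  = <node-3-public-key>
    Endpoint   = node-3.example.net:51820
    AllowedIPs = 10.20.0.3/32

Tools like Netmaker or Tailscale generate and distribute exactly these peer blocks for you, which is most of the value proposition.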
Worth noting, the WireGuard creator has specifically mentioned these sorts of management features (user auth, automated configuration and coordination, ACLs) as out of scope for the WireGuard project. He wanted to keep it as simple as possible, and left it to 3rd parties to develop VPN platforms using WireGuard.
> How can you trust another company for such an important security service like VPN?
It's pretty easy, you just look at Gartner's Magic Quadrant and pick one in the upper right hand corner. Any VPN solution can have vulnerabilities, and any (sane) company would keep it up to date. A lot of hardware firewalls have ASICs to accelerate VPN traffic, and this may be a requirement to handle the amount of traffic a company might have.
A pain point I still haven't resolved with WG is this. From my phone, I want to access my homelab through the WG server at home, but everything else through an external WG VPN somewhere else. My homelab ip range is 10.10.0.0/24 or whatever, but the external VPN is some other range. Wireguard doesn't seem to like this. The alternative is to route my phone to home for 100% of traffic, and my home router would egress through the external VPN. But I don't want my home network to be the bottleneck, I don't exactly have the best Internet service at home. From a cursory search through NetMaker docs, I didn't see anything that explicitly shows how to configure vpns, or, I don't know what to call it, bridging multiple wireguard networks.
A single wg interface/config on your phone, with two peers, one for home wg and one for the remote vpn, with AllowedIPs on the home peer as 10.10.0.0/24 and AllowedIPs on the remote vpn peer as 0.0.0.0/0?
Have you tried excluding the CIDR of your home wg from the AllowedIPs of the remote vpn? Like not having 0.0.0.0/0, but one or more entries that end up excluding local CIDRs.
The external VPN provider dictates what my interface's IP range is. I generated a new WG config using the UI, and it comes with interface IP 10.66.123.123/32. If I want my home ip to be 10.10.0.2/32, then I'll have to set it as 10.0.0.0/8, and the VPN provider didn't seem to agree with that.
I have two wg interfaces and I manually switch between them.
You should be able to have two peers. The only thing I see is that you may need to explicitly define the routing.
The first peer is easy, with AllowedIPs of 10.10.0.0/24. The 2nd peer will need more config, as it should route everything else.
See this answer for an example where all ranges that are not RFC1918 are listed: https://serverfault.com/a/304791
I expect that you need to enter those ranges as AllowedIPs for the peer that should route to the public internet.
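Putting the two-peer suggestion together, a minimal sketch of a single phone config (assuming a 10.10.0.0/24 homelab; keys, endpoints, and addresses are placeholders) could look like the following. WireGuard's cryptokey routing sends each packet to the peer with the longest matching AllowedIPs prefix, so the /24 on the home peer wins over 0.0.0.0/0 on the provider peer, and enumerating non-RFC1918 ranges is not strictly necessary:

    [Interface]
    # both tunnel addresses: the one issued by the provider plus the home one
    Address    = 10.66.123.123/32, 10.10.0.2/32
    PrivateKey = <phone-private-key>

    # home WireGuard server: only the homelab range
    [Peer]
    PublicKey  = <home-server-public-key>
    Endpoint   = home.example.net:51820
    AllowedIPs = 10.10.0.0/24

    # external VPN provider: everything else
    [Peer]
    PublicKey  = <provider-public-key>
    Endpoint   = vpn.provider.example:51820
    AllowedIPs = 0.0.0.0/0, ::/0

Whether a given provider's app or server tolerates a second peer and address being added to the config it issued is a separate question, which seems to be where the trouble above lies.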
Netmaker may help with this. You configure an "Egress Gateway" to 0.0.0.0/0 as your internet VPN, inside of a Netmaker network of 10.10.0.0/24. We do our routing rules differently and it is meant to be compatible. However, I'm a bit surprised you have this issue with regular WireGuard, as it tends to be quite stable for this sort of setup in my experience.
I've been using Netmaker for a few months now and it is incredible. Bastion VPN management for all our environments.
The only thing we haven't gotten to work is full 0.0.0.0 forwarding. Docs say it's possible (though not a fully common use case), but we always get hangs when attempting it. Usually we have to use sshuttle.
Curious what other people's setups are with the server hosting. Do you expose the Netmaker bastion to the public Internet for all your VPCs, with the bastion living outside the VPC? That's what they recommended to me when I set mine up, but I also explored putting my bastion inside my VPC and exposing it to the internet.
> are you accessing using "external clients" or the regular netclient?
"External Clients" on OSX Wireguard.
[More Info]
The use case that we have is when we need to access an Akamai network through a whitelisted IP during development.
Our AWS networks have a Priv Subnet w/ a static IP NAT and a Public Subnet, both prod and staging.
Since we wanted all our local machines' traffic to go through the AWS NAT, we hoped for: Local -> Bastion EC2 (Public Subnet) -> EC2 (Private Subnet) -> NAT -> Internet.
So to get set up, we tested: Local -> Bastion EC2 (Public Subnet) -> Internet. When we set the Bastion EC2 to have an egress of 0.0.0.0, the WireGuard handshake would never complete, just hang.
Let me know if there's anything else I can provide.
Netmaker is legit and backed by Y Combinator. It's based on Wireguard (like Tailscale) and is focused primarily around Kubernetes use. In my opinion, it is a better solution than any other zero trust networking solution due to it offering built in ACLs and traditional Wireguard/VPN features as well as the P2P encrypted mesh overlay.
You may find it easier to work with zerotier or tailscale, but NetMaker is something to keep an eye on. I met the founder at KubeCon last year and he's a really approachable and nice guy. You can always reach out to him directly if you have specific questions or concerns.
Digging through their repo on github... I'm really disappointed at the lack of `_test.go` files. This is a security product, I'd be terrified something bad could happen.
Golang is relatively easy to unit test if the code is architected to enable it (with di-like patterns) and from experience, that will catch things that integration tests can easily miss. I would rather see both sets of tests for something dealing with security.
It's definitely lacking on the unit-testing side and we should do more there, but we think integration tests are the more important of the two in this scenario, because the most fragile bits are the interactions between client-client and client-server. It's a lot harder to self-contain those tests without deploying the platform and clients on various infrastructures.
i'd argue that is exactly what you want to unit test. any breakage of client->unit test client or server->unit test server should be a big no no. otherwise you easily risk backwards compatibility issues.
which ones did you try and what was your take on each?
i only tried one, after seeing it mentioned on hn and elsewhere multiple times by people i trust, but i'm curious about hn users' experience with other services
I have used netmaker and their netclients inside my kubernetes cluster to have access to all k8s pods over a secure wireguard tunnel. No need to worry about ssl certificates on my admin dashboards for longhorn, pihole, portainer, linkerd and others. Also, no need to k8s port forward every time I wanna access a specific service inside the cluster network. Netmaker and kubernetes for secure and manageable cluster network access is unbeatable I think.
I have to connect a Mac on latest OS to a Windows 7 PC on the same LAN to get access to an established VPN (L2TP/IPsec) on that machine.
I have never seen a space as confusing as the VPN world. While I can connect e.g. to a SoftEther VPN Server on the windows box just fine, the installation breaks the established VPN connection to the external L2TP VPN.
Installed Wireguard and have absolutely no idea how to configure that thing without investing hours...
This should be a simple problem to solve without an additional VPN. On the Mac, you should be able to add a static route for the VPN destination and point it to the IP of the Windows box. This is assuming that the Windows firewall isn't blocking the incoming connection from your Mac.
Tangent question about WireGuard: I use the VPN at work, but DNS does not seem to be working properly on macOS. The VPN is configured to use an internal IP for DNS, like 192.168.10.200, but when the VPN is on, that address is not routing properly (it seems the route is not going through the VPN, so nothing is found, even though the VPN config does include that IP range). Does anyone know how to get this sort of thing fixed?
Wireguard requires an open UDP port, by default 51820.
NetMaker, at least based on the quick install manual, asks you to open up the following:
- 443, 80 (tcp)
- 3479, 8089 (TURN, TURN api)
- 8085 (exporter EE)
- 1883, 8883, 8033, 18083 (if using EMQX)
But perhaps none of these are required for actual WAN/Wireguard connections and one needs only limited access to these ports in order to configure the software.
You can lock it down a good amount:
- 80 is only required for Caddy to request certificates. If you BYO certs, you can take that off
- TURN is optional, so if you disable TURN then you don't need 3479 or 8089
- The remaining ports are only for specific features (EMQX and Prometheus exporter) which are not enabled by default.
So really, you could get it down to just 443. However, this should be better documented.
Also worth noting these are all server-side requirements. The actual WireGuard clients do not need these ports open.
i use wireguard through tailscale to access multiple machines over ssh, with their setup i was able to reduce exposure as i no longer need to open a port on the router
tailscale does this with their DERP servers
i doubt netmaker doesn't have an alternative to connect machines behind nat routers; that would be a serious disadvantage for soho setups
How can I connect my Android or IOS device to my Netmaker VPN?
Currently meshing one of these devices is not supported, however, it will be soon. For now, you can connect to your VPN by making one of the nodes an Ingress Gateway, then create an Ext Client for each device. Finally, use the official WG app or another WG configuration app to connect via QR or download the device’s WireGuard configuration.
NAS should work if it's linux or freebsd based. For mobile you (currently) have to use our "client gateway", which accesses the network just using a regular WireGuard config file, which you can scan from the Netmaker UI using the WireGuard app on your phone. But it works well. However, we've got a mobile app in the works.
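For a rough idea of what such a per-device config looks like (this is a generic WireGuard sketch with placeholder values, not Netmaker's exact output), the phone gets a single peer, the gateway node, and sends the mesh range through it:

    # scanned into the WireGuard app via QR; placeholder values only
    [Interface]
    Address    = 10.10.10.50/32
    PrivateKey = <phone-private-key>
    DNS        = 10.10.10.1

    # the node acting as the client/ingress gateway
    [Peer]
    PublicKey  = <gateway-public-key>
    Endpoint   = gateway.example.net:51821
    AllowedIPs = 10.10.10.0/24
    PersistentKeepalive = 25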
I like it. Use it to get around my CGNAT with StarLink (client running on home lab, docker client running on VPS with a docker instance of Nginx proxy manager joined to the netmaker client docker network so I can proxy back to my home lab).
These over the top services handle node discovery and meshing for you, instead of you having to feed configurations manually down to each device you want to hook up to wireguard.
Netmaker guy here, and I'll be the first to tell you that if you have a static setup of, let's say 5 machines or less, then there's no need for something as complex as Netmaker. It's really useful for people who have many machines, or machines that will move around dynamically. Or, if you need to route traffic through a NAT gateway. A static setup is fine for technical people and small networks, it's just not scalable. As an analogy, you wouldn't run Kubernetes if you just need to deploy 3 docker containers, but as the complexity grows, you need a management system.
One thing I like about Netmaker is it establishes VPN tunnels between devices in a peer-to-peer manner using the Wireguard protocol and has an interface to manage it all.
people that want to connect: roaming machines, machines behind nat
i use it to connect a notebook to a machine that sits behind nat, and to circumvent firewall rules
before i had to maintain an open port on the router and a dyndns-like process to connect to the machine behind nat; now not only is that machine no longer exposed on the internet, i also don't have to keep separate configs for the same machine (local and remote)
in regards to circumventing firewall rules it's just git with keys vs git over http, i just proxy jump through the home machine and call it a day
You can think of Netmaker as essentially a mesh network configuration manager that uses Wireguard under the hood to create the mesh network. It’s easy enough to connect two machines with wg. But when you need to manage fleets of them, you need something that automates discovery, onboarding and off boarding of machines, etc
(Not associated with Netmaker at all, that’s just my interpretation from using their offering for awhile)
First, I get that the landing page didn't answer your questions. Building the right "first" page for the right customer... that's a potential opportunity in and of itself!
I believe that tailscale recently (~3 months ago?) made some significant performance improvements and this comparison was from before that. But, they can have performance differences because, I believe, on Linux netmaker uses the kernel Wireguard where tailscale uses a user-space go implementation.
For the particular case of creating a wireguard mesh network in kubernetes, I've been quite happy with Kilo[0]. Does anyone with experience in both kilo and netmaker know how they compare?
Kilo is cool! And works. It will be similar, just sort of depends on what you're comfortable with and what sort of management features you need. Some people like a UI where they can see all their nodes and troubleshoot without having to SSH, which is a primary advantage, but Kilo is probably better for smaller setups that are purely for Kubernetes.
I remember looking at both and deciding on Netmaker at the time. Netmaker should be able to do a strict superset of what kilo does. I was able to get Netmaker running on Kubernetes by injecting the client onto each node via a daemonset.
It can do fancy things but it's definitely geared towards more of a power user base.
I think its main appeal over all of the other solutions (i hope this is still the case) is that it uses kernel-level wireguard instead of userland wireguard, so for performance it is unrivalled. It's what I would pick if I was setting up a mesh for servers that constantly talked to each other and exchanged a lot of data
Does netmaker have the equivalent of tailscale's DERP servers yet that run on port 443 and proxy connections through https/websocket to bypass restrictive firewalls? Userland wireguard perf is worse than kernel wireguard but I don't want to give up DERP servers.
you're not the first to complain about it[0], but the tone is uncalled for - you can just cmd + w that tab and move on + use it as a signal when picking your service provider
Your post was correctly flagged because it broke several of the site guidelines, which include:
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
Or maybe, just maybe, posting a comment that is egregiously outside the guidelines of the site means that people will see it and appropriately flag it.
Not everything is political and it’s showing that that is what you immediately jumped to.
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
> Please don't use Hacker News for political or ideological battle. That tramples curiosity.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
Seeing as this service(?) is, quote, "Backed by Y Combinator" I decided being nice about it on Y Combinator isn't worth my time nor that of any readers passing by.
Not to mention websites breaking my scroll bar and disrespecting my time are so common these days they indeed really aren't interesting, which is all the more reason to call them out nicely or otherwise.
> I decided being nice about it on Y Combinator isn't worth my time nor that of any readers passing by.
That’s great and all, but the guidelines are there for a reason. Break them enough and you will eventually be banned. And once again, breaking them is a valid reason to get flagged and downvoted, whether you like it or not.
> Not to mention websites breaking my scroll bar and disrespecting my time are so common these days they indeed really aren't interesting, which is all the more reason to call them out nicely or otherwise.
The logic here really doesn’t follow. Like even remotely. If the topic isn’t interesting then your comment is even less so.
>The logic here really doesn’t follow. Like even remotely. If the topic isn’t interesting then your comment is even less so.
Not pointing out a problem means I implicitly accept it as not a problem. I find the aforementioned to be problems, so I will point them out, whether nicely or otherwise.
You just said it happens so often that it’s no longer interesting. If it’s not interesting, then you shouldn’t care enough to comment. Nor is your comment remotely interesting to yourself, the author, or anyone else.
Which is why there’s a whole guideline about it. Shocking, I know.
> so I will point them out whether nicely or otherwise.
And you will continue to be downvoted and flagged when your comments, which we both know are not anywhere near “nicely”, are against the guidelines.
Whether something is interesting and whether something is a problem are two different things.
The first step to addressing a problem is calling it out, which I will do because I do not appreciate websites breaking my scroll bar, among other transgressions, no matter how mundane it becomes.