WireGuard Gives Linux a Faster, More Secure VPN (wired.com)
741 points by axiomdata316 on March 2, 2020 | hide | past | favorite | 295 comments



I really like WireGuard, but one thing that bugs me is that it's layer 3 (an IP tunnel) and has no support for layer 2 (Ethernet MAC tunneling). The downside for me is that you have to manage static IPs in the configurations (specifically, it's not compatible with IPv6 SLAAC and NDP). There is https://git.zx2c4.com/wg-dynamic but it's very experimental at the moment.

The layer 3-only tunnel is motivated as "the cleanest approach for ensuring authenticity and attributability of the packets" (in the whitepaper), but in fact every claim and routing algorithm described (needed since the tunnel is many-to-one) would work equally well substituting "IP address" with "MAC address" (I may be missing something, but it's certainly not made explicit anywhere). And indeed, imho, it would be less surprising to have an "allowed MAC address" option in the configuration than an "allowed IP address": it's already common practice to whitelist the MAC addresses of physical endpoints (in offices). I'm toying with the idea of forking the driver code to adapt it to Ethernet frames, as I don't think it would need any big rewrite, but I'm realizing my inexperience in writing kernel code.
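To make the complaint concrete, this is roughly what the static-IP bookkeeping looks like in a (wg-quick style) config; the addresses and key placeholders below are made up. Every peer's tunnel IP has to be pinned by hand in AllowedIPs, which is exactly the kind of assignment SLAAC would otherwise do dynamically:

```ini
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.0.0.2/32

[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.0.0.3/32
```

Each AllowedIPs line is both a routing rule (packets for that range go to that peer) and an authenticity check (decrypted packets from that peer must have a source in that range), which is the many-to-one attribution the whitepaper is talking about.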


In most scenarios, you want to avoid L2 tunnels to reduce complexity and/or performance issues.

The chain of thought typically goes like this:

* Remote networks are connected via L2 tunnel.

* ARP requests are broadcast over the L2 tunnel to all connected networks, introducing scalability issues

* Proxy ARP is introduced to cache ARP responses

* Proxy ARP may become out of date or not scale as the L2 domain grows.

* BGP is introduced to keep track of and broadcast all topology changes

* How do you mitigate the issues caused when Proxy ARP fails?

Most of these issues go away if you use IP tunnels instead of Ethernet because IP was designed to be routable.

For your point on security... Whitelisting MAC addresses doesn't provide security. These are trivial to spoof. Same with IP. Please start relying on cryptographic primitives to establish workload identity instead. I highly suggest looking at SPIFFE to get started here.

If you must send L2 over the VPN, please use an L2 EVPN, which is designed to handle the complexity and provide fault tolerance. There are numerous SDNs out there you can use to implement this, including Tungsten Fabric and OpenDaylight. No need to complicate WireGuard to support EVPN.

[edited to improve formatting of bullets and clarity of wording]


It should be possible to run GRE, L2TP, or VXLAN over WireGuard although such tooling probably doesn't exist yet.


Sure, but it hurts a bit to run a tunnel on top of another tunnel, and since you have to run WireGuard as-is, you still have to do the static IP thing. It's a bit insane to have ethernet > udp (l2tp) > ip > udp (wireguard) > ip > ethernet. That's at least ~146 bytes of overhead per frame (udp/ip: 2*48, l2tp: 4, eth: 14, wireguard: 32 counting its 16-byte transport header and 16-byte auth tag).
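A quick back-of-the-envelope check of that stack (a sketch: it assumes IPv6 so each IP+UDP pair costs 40+8 = 48 bytes, a minimal 4-byte L2TP header, and WireGuard's 16-byte transport data header plus its 16-byte Poly1305 tag, which together come to a bit more than a quick count suggests):

```python
# Per-frame overhead of: ethernet > udp (l2tp) > ip > udp (wireguard) > ip > ethernet.
# Header sizes vary with IP version and protocol options; these are typical values.
IPV6, UDP = 40, 8
WIREGUARD = 16 + 16   # transport data header + Poly1305 auth tag
L2TP = 4              # minimal session header
ETHERNET = 14         # the inner frame's own header

overhead = (IPV6 + UDP) + WIREGUARD + (IPV6 + UDP) + L2TP + ETHERNET
print(overhead)  # 146 bytes before the inner payload even starts
```

With a 1500-byte path MTU, that overhead eats nearly 10% of every full-size frame, which is why people wince at tunnel-in-tunnel designs.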


I've run VXLAN over top of wireguard connections. One advantage is that you can have multiple intermediate wireguard connections that are not visible at the VXLAN layer.


What tooling do you need for this? Shell scripts would be the traditional approach.
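For the curious, a sketch of what such a shell script might do, assuming a wg0 tunnel is already up with internal addresses 10.0.0.1 and 10.0.0.2 (interface names, the VXLAN ID, and all addresses here are illustrative, not a vetted recipe):

```
# on site A (swap local/remote on site B); needs root
ip link add vxlan0 type vxlan id 42 dstport 4789 \
    local 10.0.0.1 remote 10.0.0.2 dev wg0
ip link set vxlan0 up
ip addr add 192.168.100.1/24 dev vxlan0   # or enslave vxlan0 to a bridge
```

Since the VXLAN remote is a WireGuard-internal address, the encapsulated frames have no route except through the tunnel.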


Yeah, as long as the shell script has been audited. It would probably be easy to accidentally send the GRE traffic in the clear instead of through WireGuard.


Doesn’t seem that easy to fuck up, you just use internal WG IPs for the L2 tunnel. Packets just won’t go anywhere if WG is down.


I don't have a problem with it being layer 3 rather than layer 2. But the lack of dynamic configuration is a bit of an issue. I don't care too much about needing static ip addresses, but I do want to be able to push down dynamic routes and dns servers to clients.


That it doesn’t act as a DHCP server, or as anything other than what it’s for, is my main reason for loving it.

Logical functionality should be decoupled to ease maintenance and reduce risk from complexity.

Config management tools can dump some text in a file for corporate needs.

Apply it over SSH or via git.

It’s just a bit of text to lay down.


Substituting MAC for IP address is exactly what ZeroTier does. MACs can't be spoofed, though nodes can be designated as bridges and that allows them to impersonate MACs.

There's still the issue of authenticating IPv4 IPs though, which are too small to embed anything useful into. ZeroTier has a certificate system for that but it requires the use of the rules engine to enable it.


MACs can be spoofed. There are entire companies which begin their sales pitch with "So, I can poison the ARP cache to take over your DNS in your Kubernetes Cluster" due to the NET_RAW capability required to respond to ICMP (ping). :)

You'll want to use a crypto-based identity if you want to ensure spoofing isn't occurring. Even then, you can still be DoSed by a malicious actor. Tools like eBPF may be able to help here by filtering out source MAC addresses that don't match the source interface's hwaddr.

edit: Sorry, I didn't read this comment properly. In ZeroTier I can believe that they cannot be spoofed across the VPN due to relying on a cryptographic hash. :)


You've made me look at ZeroTier. It's a bit of a shame you are being downvoted, because ZeroTier looks to be original, clever, and open source.

The downvoting is probably because you used the word MAC without defining it, so naturally people think it's a "Media Access Control address" or a "Message Authentication Code", but it's more complex than either. It is an address, so it does perform the same function as a Media Access Control address, but [0] says it is "computed from the public portion of a public/private key pair. A node’s address, public key, and private key together form its identity.", and it uses proof of work to prevent forgeries. Thus your statement that "MACs can't be spoofed" is correct, or at least is until someone breaks it. The "proof of work" bit did cause an eyebrow to rise, as it is vulnerable to exponential drops in the price of computing.

For those still reading, my (very brief) look at ZeroTier is that it does far more than IPsec / WireGuard - it solves the internet-scale routing problem in its own way, handles address spoofing, and a number of other things as well. It's undoubtedly far simpler to use than WireGuard or OpenVPN, as routing with those protocols in large networks is a complete PITA. It treats IP rather like IP treats Ethernet - as a fabric it runs on top of, one that, unlike Ethernet, connects most nodes on the planet. For nodes that aren't fully connected (like those behind a NAT), it creates paths (ie, does routing), and if multiple paths are available it uses several concurrently to get the best throughput.

[0] is well worth a look if you are curious about such things. I am going to take a much closer look when I get time.

[0] https://www.zerotier.com/manual/


Can you not set your MAC to whatever you want on many platforms?


I think they're implying that MACs are authenticated the same way IP addresses are authenticated with Wireguard (you can say "only 52:54:00:7a:cc:dd can talk over this connection").


> MACs can't be spoofed

lol

20 years ago, in college, some folks in the dorm had fun fucking with others at the Ethernet level. Most of us only had experience with the IP level, so couldn't understand what was going on.


ZeroTier doesn't permit spoofed MACs unless explicitly configured.


and how can they tell?


MACs are computed directly from cryptographic hashes. For normal ZeroTier P2P traffic the MAC and Ethernet header are elided entirely too, which saves about 14 bytes of per-frame overhead.
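A toy illustration of the general idea (this is not ZeroTier's actual derivation; the real details are in its manual and source): hash the node's public identity together with the network ID down to 48 bits, then force the locally-administered bit on and the multicast bit off so the result is a valid unicast MAC that any peer can independently recompute and verify.

```python
import hashlib

def derived_mac(public_key: bytes, network_id: int) -> str:
    """Map a node identity to a stable 48-bit MAC (illustrative sketch only)."""
    digest = hashlib.sha256(public_key + network_id.to_bytes(8, "big")).digest()
    octets = bytearray(digest[:6])
    # Locally administered (bit 0x02 set), unicast (bit 0x01 clear).
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

mac = derived_mac(b"example-node-public-key", 0x8056C2E21C000001)
print(mac)
```

Because every peer derives the same MAC from the sender's public key, a frame claiming a MAC that doesn't match the cryptographically verified sender can simply be dropped.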


Wait, are we talking about MAC as in cryptographic Message Authentication Code, or MAC as in ethernet Medium Access Control (addresses)?


MAC as in 48-bit Ethernet addresses.

https://github.com/zerotier/ZeroTierOne/blob/master/node/MAC...

There's a tiny chance of a collision, but for it to be meaningful you'd have to have a virtual LAN with millions of devices on it.
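The "millions of devices" intuition checks out with a quick birthday-bound estimate (a sketch; it assumes roughly 46 effectively random bits per MAC, since the locally-administered and unicast bits are fixed):

```python
import math

def collision_probability(n_devices: int, random_bits: int = 46) -> float:
    """Approximate P(at least one MAC collision) among n devices (birthday bound)."""
    space = 2 ** random_bits
    return 1 - math.exp(-n_devices * (n_devices - 1) / (2 * space))

print(f"{collision_probability(1_000):.2e}")      # a thousand devices: negligible
print(f"{collision_probability(4_000_000):.2f}")  # millions: no longer ignorable
```

At a thousand devices the odds are around one in a hundred million; you need LANs in the millions of nodes before collisions become a realistic concern.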


Out of curiosity, what is the use case for doing VPN at layer 2?


Pretty much every time I do a data center or office migration, I set up an OpenVPN tunnel that bridges the network segments at the two locations. It makes the move so much easier.

Once set up, I can shut down a machine at one location, move it, bring it back up, and it's back in business. There are situations where we might want to migrate to new machines during the move, which this makes no harder. But for many things it makes them easier.

For example, the last move went something like this: Set up the VPN+bridge. Move half the application servers. Set up new firewall/load balancer since we were replacing the old ones. Test the new fw/lb. Physically move the primary database server during a maintenance window and switch over to the new fw/lb. If there were problems, just switch back to the old one via DNS record changes (TTL was lowered weeks earlier). Move the remaining app servers. During the bridging setup, the LBs preferred the local app servers.


It's been a while since I've worked with linux networking. I would have thought it would give you a VIF in some form or fashion that you could attach to a bridge. Is that not the case?

EDIT: *and proxy arp requests through


By "it" do you mean Wireguard? I haven't used it, but you need a special type of virtual interface for bridging, a tap device can do it, a tun cannot. From some searches, Wireguard doesn't support operating on a bridge. OpenVPN, which is what I've used in the past, supports both tun and tap interfaces.


Also ICMP: IPv6 brings a lot of new things to the table in that realm. I already said it above, but the Neighbor Discovery Protocol is quite useful for dynamic in-band configuration. ICMP runs over IP, but that's no use if the IP link is itself managed by the tunnel protocol.


Someone else said broadcast/multicast, so I'll also add communication with legacy systems that don't speak IP or have other wacky requirements. These do exist in industrial and embedded settings. It's a niche use case but it's very useful there.


I suspect Novell/IPX is still out there


Yes, and more. Check out what runs on factory floors sometime. There's stuff that speaks naked Ethernet, as in you type the MAC of the machine into the application. There's also stuff that speaks CANbus over Ethernet without IP in the middle.


Raw ethernet is nice because you can't send ethernet frames from a web browser.


Wow. Had no idea. Fascinating!


I went through training on it back in 2012 - apparently it (at least at the time, not sure about now) was dominant in the Australian mining industry, so the larger tertiary education providers were requested to at least familiarise students with it.

It was a strange beast but there were a few odd spots where it was better than Active Directory - e.g. an "Organizational Role" could be created and have a user assigned to it, so you could more easily separate the user (John Smith) from their position and the permissions that go with it (finance director). So when John Smith retires it is trivial to replace the occupant of the finance director organisational role.


I've always wondered why we don't use that as our subject for all sorts of business needs. I'm talking about normal employee-to-employee business in addition to more technical things like security groups and so forth. Don't email Karen, email [whatever her role is], at least for official requests pertaining explicitly to defined job responsibilities.

That way the sender doesn't get delayed by unknown turnover, and the new recipient has full history to look back upon instead of starting cold.


Broadcast/multicast propagation.


Can you say more to explain the utility of this to those unfamiliar with networking?


Many device discovery protocols work by sending out broadcast or multicast packets (either to announce themselves to devices who might be listening or to request devices to send them data). These packets are expected to go out to either everyone on the same layer-2 network (the broadcast case) or everyone who has subscribed to a particular multicast address (the multicast case).

In addition to device discovery, these are frequently used for heartbeat messages to indicate that you are still alive (for high-availability protocols like VRRP).


Thanks! That's helpful/interesting and makes sense


One common use case is multicast DNS (https://en.wikipedia.org/wiki/Multicast_DNS) which uses multicast for individual hosts to publish services available on them to other hosts on the network without needing a dedicated DNS server.
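To make the mechanics concrete, here's a minimal sketch of the wire format involved: mDNS reuses plain DNS framing, so a query is an ordinary DNS packet addressed to the well-known multicast group 224.0.0.251 on port 5353 (the hostname below is made up, and actually sending the packet is omitted since it needs a live network):

```python
import struct

def mdns_query(hostname: str) -> bytes:
    """Build a one-question mDNS A query (RFC 6762 reuses standard DNS framing)."""
    # Header: id=0, flags=0, qdcount=1, ancount=0, nscount=0, arcount=0
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!2H", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = mdns_query("printer.local")
print(len(packet))
```

Any host subscribed to the multicast group (typically everything on the LAN) sees the query and answers for its own name, which is why this only works within a single L2/multicast domain unless the VPN propagates multicast.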


You can send data to N hosts by only sending it once, rather than N times.


ZeroTier does a perfect job.


I disagree - I spent a considerable amount of time with ZeroTier as a possible replacement for a small IPsec mesh (4 sites) and it failed horribly. We had commercial support, tried different hardware, and even virtualized it. Latency was a major issue and the quality of the links was erratic, to say the least. Don't get me wrong, I think ZeroTier is great, but it's not ready for prime time.


I've had a similar experience. In particular links will just "drop out" for periods of time. The public forwarding nodes were overburdened for quite a while. I set up my own "moon", but one of the sites has a cranky NAT, which will let a connection through for a while, then fail. It seems to take at least 30 seconds for zerotier to "notice" this and switch back to forwarding via the moon. Maybe the new multipath will help?


How is the VPN responsible for your crappy underlay network?


Rather obviously it isn't. I'm not sure why you'd even ask.

I'm not the only one with external NAT that I can't do anything about; the question is what to do to mitigate this.

Switching to an explicit hub-and-spoke model would work around this, but at the expense of what I consider one of ZeroTier's biggest strengths: transparent meshing. If two machines in the network are on the same LAN, I'd like them to use that rather than the network.

Faster detection of the failure of the NAT-piercing peer-to-peer link, with fallback to the "moon" while the peer-to-peer link is being re-established, would substantially increase the usability for people, like me, who are stuck with the NAT they've got. As I alluded to, the new multi-path features that ZeroTier is getting might help with that.


Your physical network settings likely didn't allow direct connections between peers. Fix that and it'll work fine.


If it’s replacing an ipsec mesh that’s pretty hard to believe. And if that was the issue and commercial support couldn’t even identify that as the cause, ZeroTier has bigger issues.


If all sites are behind symmetric NATs, there's not much ZeroTier could do to help aside from telling him to assign direct mappings on the NAT/Firewall to each ZT instance. Symmetric NATs are antithetical to peer to peer communication. Many I've run across in the wild have special rules to handle IPSec which won't exist for other lesser known protocols. It's also possible the user wasn't willing or able to make network configuration changes to make those p2p connections possible. Without seeing what the user tried & support recommended, it's not really fair to throw out such baseless accusations.


ZeroTier uses UDP. That's hardly "lesser known" than IPSec.


"lesser known" as in protocols such as IPSec, ZeroTier, WireGuard, etc. Of which IPSec has been around forever and many NATs/Firewalls have special handling rules built in, just as @api mentioned in another comment. Yes, ZeroTier uses UDP underneath, but that doesn't mean symmetric NATs don't/won't cause havoc to peer to peer protocols using UDP.


Wrong layers of the network. IPSec is comparable to TCP/UDP, not wireguard/zerotier. It’s L4 and NAT can’t have enough intelligence to setup IPSec meshes without explicit configuration.

Finally, how can ZeroTier’s support be so incompetent to not recognize connectivity issues between endpoints? That’s one of the few things that goes wrong with tunnel meshes.


It was probably behind finicky and heavily restrictive symmetric NAT (very p2p-hostile) but with IPSec ALG in the NAT, making it work fine with IPSec but horribly with anything else. This is common in "enterprise" settings and hard to diagnose without direct remote access to run NAT characterization tests.

Symmetric NAT basically breaks everything that doesn't use a simple client/server hub-and-spoke networking model.


Yeah, I hear about that regularly but didn't look into it. I must say I'm not really happy about the whole business thing. The Arch wiki says you need an account; I'm not sure if that is true, but if it is, it's a non-starter for me. If you have good technical refs to prove me wrong I'd be happy to hear them.


You only need an account if you're using ZeroTier's hosted network controller service. You can run your own controller as well. The code to do so is in every instance of ZeroTier. https://github.com/zerotier/ZeroTierOne/tree/master/controll...


Agreed. That's my daily driver. No pun intended.


Hmm, but MACs (and Ethernet?) are on their way out with IPv6, replaced by specifically software-set GUIDs?


Most IPv6 packets are encapsulated in Ethernet frames which use MAC addresses. What may change is the vendor specific MAC address could be replaced with a MAC address generated by a cryptographic hash to preserve privacy.


Privacy from what? MAC addresses are not used on the internet, only on the local network


Not all networks are private. Also, many IPv6 addresses contain the MAC address which would effectively deanonymize you over the internet.


So instead of not using the MAC address in the IPv6 (which any reasonably modern OS does because this problem is old, well known and trivially solved) you get rid of MAC addresses altogether? Just so you can have some illusion of privacy while sending your traffic through a supposedly compromised network?

This is paranoia about all the wrong things, focusing on irrelevant details and ignoring what’s important.


I don't believe I suggested getting rid of the MAC address altogether. Ethernet isn't going away anytime soon.

The suggestion is simply this: Don't embed your MAC address into your IPv6 address because it's a unique identifier that can deanonymize you even if you shift networks.
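The embedding being discussed is the EUI-64 scheme from the original SLAAC spec: flip the universal/local bit in the MAC's first octet and splice ff:fe into the middle, which means the MAC is trivially recoverable from the resulting interface identifier. A sketch with a made-up MAC:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 IPv6 interface identifier from a MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]   # splice in ff:fe
    return ":".join(f"{full[i] << 8 | full[i+1]:04x}" for i in range(0, 8, 2))

# The MAC shows up on the wire inside addresses like fe80::<interface-id>,
# reversible back to the hardware address by anyone who sees the packet.
print(eui64_interface_id("52:54:00:7a:cc:dd"))  # 5054:00ff:fe7a:ccdd
```

This is why RFC 4941 privacy extensions (randomized, rotating interface identifiers) exist and are on by default in most modern OSes.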


Check out Algo [0] if you're interested in setting up a personal WireGuard VPN server. It's simple and hassle-free, especially if you are not familiar with server administration and don't want to be bogged down by details.

I have one deployed on Digital Ocean ($5/mo droplet). All you need to do is run the setup script, answer a few yes/no questions (optional features), paste in your API key, and update the firewall setting on Digital Ocean's dashboard.

If anything goes wrong, deploying a new one only takes minutes.

[0] https://github.com/trailofbits/algo


I honestly had no idea that DigitalOcean has a ”built-in” firewall. That’s awesome. Thank you!


Don't all cloud providers have these for their compute VMs? (e.g,. AWS security groups)


Speaking only for myself, I don't want to run my own firewall server. I want a reliable firewall service that I can use, and is multi-region/multi-az with load balancing and other features that I cannot feasibly implement myself.

At least, not without spending a great deal of my time or a great deal of my personal money to build a suitable rugged production-grade service.


+1 for Algo, it's better than Streisand imo.


What makes it better? I've been using Streisand quite successfully, and the setup appears easier from the little I've read about Algo (haven't implemented myself yet)


Personally I like Algo because it also handles provisioning a dedicated VPS with some sane system defaults. It also sets up IKEv2 and WireGuard together, which makes it easier to have a multi-platform VPN out of the box without having to set up WireGuard & IPsec separately. It may not be what everyone wants though, Streisand is great in its own ways too.


Been using Algo for years. 11/10 would use again. (I'm Chinese living in China)


This is good, however I noticed DigitalOcean's IP blocks are banned by a few sites.


Can vouch for Algo; been using it for a while and have had no issues at all.


In case anyone has trouble configuring WireGuard, I would like to share my automated deployment of WireGuard and Unbound with full IPv4 and IPv6 support, using Packer and Terraform on Hetzner Cloud (although it can be easily adapted to other providers) [1].

If automated deployment isn't necessary, it may also be useful to look directly at the WireGuard configuration [2]. Since WireGuard supports scripts in "PostUp" and "PostDown", I have automated the iptables configuration, including some useful rules to redirect 53/UDP traffic from the public interface to WireGuard, which helps in some cases to bypass firewalls.

[1]: https://github.com/hectorm/wireguard-setup

[2]: https://github.com/hectorm/wireguard-setup/blob/master/packe...
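For reference, that kind of 53/UDP redirect is typically a single NAT rule in the hooks; something like the following sketch (the interface name and listen port are assumptions, not necessarily the repo's exact rules):

```ini
PostUp = iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 51820
PostDown = iptables -t nat -D PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 51820
```

Inbound packets hitting port 53 on the public interface get redirected to WireGuard's listen port, so clients on networks that only allow DNS traffic out can still reach the tunnel.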


This is awesome, I'll be giving it a go this week. Thanks!


Increasingly it seems like heavily opinionated foundational tools and frameworks are overtaking more highly configurable alternatives, at least in terms of breadth of usage or popularity.

Could this be a positive change? Does this represent a healthy response to cognitive fatigue in a world with configuration options at every possible layer?

Or does this shift to less readily configurable tools represent an overall negative? Are we losing diversity in favor of a more vulnerable monoculture crop?

Or both?

Asking for real, not sarcastically. As a developer I’m a huge proponent of simpler, more opinionated frameworks for most projects but I’m also aware my perspective is more limited than many HN commenters.


At least in some ways, it feels to me like Wireguard is more of a return to the "unix philosophy" (if there is such a thing) when compared to solutions like OpenVPN and ipsec/StrongSwan. Doug McIlroy, amongst the designers of Unix, said that tools should "Do One Thing And Do It Well." Wireguard seems like a great example: it offers very few knobs and levers in large part because the scope of its capabilities is very small. Wireguard manages the actual tunnel between endpoints, everything else (managing interfaces and routes, disseminating keys, autoconfiguring) is left for other tools. But, Wireguard provides a simple and friendly enough interface that it's easy to write other tools to do these tasks, ranging all the way from shell scripts to some big enterprise system.

This stands in clear contrast to OpenVPN, which attempts to manage all aspects of the VPN management process from endpoint config (interfaces, routes, etc) to key dissemination (strongly preferring mutual TLS auth and specifying a format for importable VPN configs). As a result, we could say that OpenVPN "Does Everything And Does It Okay," which I'd like to coin as the opposite philosophy. This has advantages if you have some kind of complicated situation and want to keep everything inside of one tool, but the result is that OpenVPN is more complicated to use and configure, and has more surface area to attack.

To some extent this kind of limited scope comes off as opinionated but I would like to view it the opposite way: Wireguard is unopinionated in that it leaves a large portion of the VPN stack for you to handle yourself, either manually or by bringing your own tool. This is a bit annoying if you're looking for a turnkey solution, but also makes Wireguard very simple and easy to understand and audit.


> if there is such a thing

There is indeed such a philosophy:

https://www.jwz.org/doc/worse-is-better.html


> Could this be a positive change?

It's the normal and expected evolution of protocols and software.

Generation 1: New idea, new implementation. As people become comfortable with the new idea it gains acceptance and hype. Try to keep it simple and fast, but it's an exercise in exploration, and it gains technical debt faster than it gains new features.

Generation 2: Widespread acceptance and commercialization. Groups inside large corporations, and sometimes entire businesses, spring up around the new idea. They re-implement the idea to reduce technical debt and add flexibility. Features are piled on to make it marketable. Eventually becomes heavy and unwieldy.

Generation 3: Hype train dies down and people have learned what really matters and what really should be focused on. Third generation is lean, fast, and 'correct'. It becomes ubiquitous, people stop caring about it and people stop paying for it. It becomes just something that is always there and ends up as little more than a building block for the next new idea.


Generation 4: Bloat the software with so many unnecessary features - the users must want to chat with each other, no?


There isn't any generation 4. The functionality of the software is ubiquitous by that point. Nobody cares anymore except when they absolutely have no other choice. By that point even if you generated a brand new implementation you would struggle to give it away unless it was just one small part of a new innovation.


Mustn't forget that social media sharing to show that we are also hip and down with the fellow kids.


This sounds like Jared Spool's Market Maturity model.[0] He breaks up your gen 3 into two separate stages:

Stage 1: Raw Iron

Stage 2: Checklist Battles

Stage 3: Productivity Wars

Stage 4: Transparency

[0] https://articles.uie.com/market_maturity/


TLS has shown how the quest for backwards compatibility has the unintended consequence of downgrade attacks. Wireguard's lack of cryptographic agility is a feature, not a bug. Sure, it means everyone has to upgrade when a new version of the protocol comes out, but the entire point of a VPN is security.

That said, OpenBSD's OpenIKEd is just as simple and efficient, and thanks to standard compliance (IPsec, IKEv2 and MOBIKE) it works out of the box with iOS devices.


> Sure, it means everyone has to upgrade when a new version of the protocol comes out,

It will be interesting to see what happens when (or if) large enterprises and hardware vendors adopt it.


I think Opinionated can be good. I think configurable can be good too. I think the best case is nearly always "Configurable, with smart defaults" meaning defaults that work out of the box for most uses.

Definitely programming languages are on the periphery of this conversation, but I think provide some good examples of why I like opinionated tools in general.

My language of choice right now is Go, and has been for a while. One of the things I like about it is that it's a bit opinionated. For example:

Braces around `if` statements aren't optional. I prefer this to other C-Like languages that allow you to leave out braces for one-liners.

Also the document "Effective Go" exists, which lays out the canonical "best" ways of doing a lot of things. The language doesn't force you to do these things, but there is an authoritative source that makes good suggestions.

The antithesis of opinionated languages, in my opinion, is Ruby. I personally hate Ruby, but I know there are a lot of people who love it. I hate it because there are too many ways of accomplishing the same task, and to me this makes it harder to read. Go, on the other hand, is the easiest language for me to read, largely because of `gofmt` - another thing that doesn't force you to do it a certain way, but strongly encourages a standard end result.


"My language of choice right now is Go, and has been for a while. One of the things I like about it is that it's a bit opinionated."

I've frequently described Go as a very, very good 1990s language. Going through the process of maturity takes time. You can't have a "very, very good" 2020s language right now, because at the frontier we're still feeling our way through the issues.

(Remember, whatever you're about to hit reply with and try to contradict me about it being a totally smooth and polished 2020s language that's already here is also an assertion that your example basically has no room for improvement and will not improve in the next 10-20 years. Consider your options carefully before you go too "language partisan" here.)

I believe probably >75% of the hatred Go engenders is from people afraid that Go's success will erase or invalidate the 2010s/2020s languages they prefer, because otherwise, the solution to most of these people's hate/anxiety would be to just ignore Go. To which I can say to those people, you can stop worrying. It won't. And if you stay in the industry long enough, maybe someday you'll get to use the really good and polished 2010s or 2020s language. No idea what it'll be called. And you can similarly assuage the fears of the day that this new language will erase all the benefits of the 2040s languages in development at the time.

But for "opinionated" to really work, I think you intrinsically need to have years of experience to make the right calls. There's no realistic chance that we could have gone straight to the "correct" VPN choice in one shot. Too many variables, too many dimensions, too much to learn and know about the security. It's just not possible. We collectively need the decades.


[flagged]


It's because you're posting in the flamewar style, which is what we're trying to avoid on this site. Also, it's super off topic—two generic hops away from the OP.

Programming language flamewars are a special case and not in a good way. Many of us lived through seeing those take over online communities in the past and reduce them to scorched earth. That's one of our motivations for wanting to keep HN from that fate, or at least stave it off a bit longer: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Actually your comment could be a good contribution with a little editing. The first link seems good. The second is too flamey.


I agree that programming languages are interesting to view along that axis.

I've done a fair amount with Elm, which is undoubtedly hugely opinionated, doing things like locking JavaScript interop behind a message passing system and baking protection from XSS into the language.

Mostly I'd say this all encourages you to do things a better way, but it can be painful, and particularly given the early nature of the language, meeting the edges of the language can be very painful because of it.

In contrast, I adored working with Scala because it was so powerful, but it sits close to Ruby in the "you can do everything a million different ways" rankings. The more I did with it, the more I wanted a refined subset of what was there (which may be what Dotty/Scala 3 ends up being).

Things like "you must always use braces on if statements" are rules I always end up enforcing with tooling anyway, because violations are just bugs waiting to happen, and they're the low-hanging fruit of this debate. Too many languages take the approach of "if we can parse it, it's fine", when really the aim should be to make the code clear not just to a parser but to the person reading/writing it too. Hopefully more languages will be more opinionated about that kind of thing in the future.


> I think Opinionated can be good.

It can be. In fact it's almost essential if you are handing out knives to children; you want someone who is very opinionated about the dangers posed by sharp knives.

There was a recent HN post making the same point about JWTs. JWTs allow the null cipher, and in the hands of people who might not appreciate the disaster caused by production code accepting JWTs that use the null cipher, you want a _very_ opinionated implementation that prevents it.

But for me, such an implementation would be a total pain in the arse. The null cipher is there to make debugging easier - getting things working for the first time can be very difficult without it. You can stick your opinions on whether I should be using it where the sun don't shine.


Ideally, things should be opinionated but configurable. I think of that as having good, sane defaults, with a straightforward initial setup that doesn't revolve around tweaking those defaults.

With a security product, however, I can understand the allure of offering few to no options. Laypeople get security wrong at an alarming rate, even with good defaults, so I often don't mind a security product just offering one configuration that the (presumable) security experts who built it have decided is the right way to use it.

Of course, if they turn out to be wrong about something, and a mitigation would be "disable feature X", then this requires a patch and new release, when it might have otherwise just required a configuration change.


Take OpenSSH. You get sane defaults and still have a ton of settings to fiddle with when you encounter odd edge cases or complex scenarios, such as jumping through a chain of proxy hosts or talking to some legacy embedded SSH server and such things.


It's a necessary change and a response to cognitive overload. Everything I have to think about in an existing system is mental energy I can't spend on more useful tasks, like creating something new.

In the Information Age, attention and cognitive bandwidth have become precious and limited commodities that should never be wasted on any unnecessary concern.

I have a rule of thumb in relation to product or project adoption: every installation step cuts adoption in half. If 1000 people find a project and it has a 5 step installation, approximately 32 of them will install it. Make it a 7 step installation and that number is cut down to about 8.
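The rule of thumb above is just repeated halving; a quick sketch:

```python
# Back-of-the-envelope version of the rule above: each installation step
# halves adoption (integer division, so the figures round down slightly).
def estimated_installs(visitors: int, steps: int) -> int:
    return visitors // (2 ** steps)

print(estimated_installs(1000, 5))  # 31, the "approximately 32" above
print(estimated_installs(1000, 7))  # 7, the "about 8" above
```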


This is not some strange paradigm shift, it's the UNIX model 101 [0]

[0] https://en.wikipedia.org/wiki/Unix_philosophy#Program_Design...


It definitely depends on context.

There are some cases where opinionated is just fundamentally better. I'd say that a good example of this is code formatting tools. These have been historically highly configurable, but that creates huge amounts of room for bikeshedding and conflict, where consistency is by far the most important thing and the actual style itself barely matters unless you start getting silly.

I think it essentially becomes a sliding scale on how much consistency and "getting it right" are more important over having something be optimal or perfectly adapted to the situation.

When it comes to security tools like VPNs, reducing the chances for users to shoot themselves in the foot is almost always more important than anything else, so it seems like another area that would be beneficial to have something more opinionated rather than more configurable, so the decisions are in the hands of people who have invested the time in understanding the problems at hand.


I think there's room for both. I understand your question, but just because there's a trend towards one way or the other, doesnt mean developers should just go with what mainstream is moving towards (not that that's what you're saying.)

The main problem I have with highly configurable utils is that a lot of them don't have sane defaults (or any defaults), which might be ok considering most users want-to or enjoy spending hours writing custom config, but it's a big ask for things I want to use quickly, or just try.

So, imo it depends on the software.


>heavily opinionated foundational tools

Opinionated isn't the right phrase here. For something like webdev there are 200 "right" ways to do things. For encryption "throw out all the legacy crap and focus on 1 known strong tech" isn't opinionated...it's common sense.

...if you have the luxury of a fresh start. Which mainline kernel has granted wireguard.

Monoculture...maybe...but I reckon the tons of legacy stuff in OpenVPN is way more dangerous. Especially because it kind of forces mass adoption of weaker ciphers due to compatibility BS.


> Could this be a positive change?

For cryptographic (and related) applications, it certainly seems standard engineering advice now to reduce choices and configurations to a minimum [1] (but apparently not 2 decades ago, when OpenSSL, OpenVPN, and GPG were initially released).

[1] pretty sure Bruce Schneier et al. recommended it in their 2010 book Cryptography Engineering.


> As a developer I’m a huge proponent of simpler, more opinionated frameworks

Until you run into the limits, of course. If you control both sides, you can use what you want, but as soon as you implement just one side ...


I think opinionated is the real deal. You can make smarter software and focus on features and on the way create a better future. Being able to twist and specialize everything is not always good.


As a tech-affine user I appreciate simpler tools. I love configurability, but if I have to fight with every tool it's really hard to stick with Linux...


Opinionated is great as long as it allows for future backwards compatibility. This sort of thing is critical for things like this that depend on cryptography. There has to be a way to support the old thing at the same time as the new thing when it looks like the old thing might eventually have to be swapped out. There has to be a way to do the transition.


This is the opposite of what cryptography engineers believe today.


Which ones? How do they suggest that cryptographic upgrades occur?


In the cryptography world backwards compatibility is basically "let the adversary switch me back to the old and busted protocol so I can be owned even after I upgraded to the latest version."


Or, in the DROWN case, ricochet the new protocol off the old protocol to use individual elements of the old protocol to break the new one.


Most required upgrades do not involve anything "busted". Weaknesses are often noticed long before any practical attacks are available. If you want to upgrade, say, Wireguard in such a case you would have to switch over the endpoints in pairs. Obviously that is going to be impossible in practice so the system will get backward compatibility grafted on in a fragile and dangerous way.

OpenPGP is an example of a case where relatively extreme backwards compatibility is required as old archived messages have to be accessible. But that isn't a problem because things are such that downgrade attacks are impossible. The list of desired methods is in the public key which is signed with itself. So downgrades are not always an issue.


You can straight up google 'pgp' and 'downgrade attack' so maybe that's not that great an example.


Do you have an actual example? Normally when people talk about a downgrade attack on OpenPGP they just assume it is somehow possible without actually checking that it is.

Note that I am only claiming that downgrade attacks are technically impossible for OpenPGP due to the way that it works. To break the protection against downgrades means that you have to break the root cryptography. That might not be true for other stuff... Makes for a great example though...


Something like this:

We introduce wireguard2, which is not wire-protocol-compatible with original wireguard. The same configuration files can be used, but you must generate new keys as part of your switch over.

We strongly advise you to stop using original wireguard if there is any possibility of a wealthy, organized, determined attacker intercepting your communications. (See CVE2021-x. and forthcoming paper "64 qubits can deduce Curve25519 points" by D.J. Bernstein et al.)


So at midnight July 23 2026 everyone upgrades to wireguard2 all at once? Perhaps I am not getting what you are proposing here...


Just like with TLS and its "ciphersuites", you expose the vulnerable components for as long as (1) you're required to by your users and (2) the risk is bearable. At some point, you stop exposing the vulnerable component at all. Ciphersuite negotiation doesn't free you from this requirement, but it does make it harder to ensure that peers who agree on non-vulnerable parameters are actually able to use them.

None of this is complicated. It's also worth looking back on the history of TLS vulnerabilities to get a sense of just how little ciphersuite negotiation helped anybody.


My understanding is that Wireguard has no way to do anything other than what it does now. There is no way to use an upgrade in the protocol.


The same was true of TLS!


And that is a good thing.


Not everyone, only a single network at a time. A common use will be corporate VPNs or VPNs on DigitalOcean-like services; in those cases there is little to no reason for interoperability between two distinct networks.

or at least you might want different keys anyway


Every time I see a product or project that describes itself as "opinionated", what it really means is the developer implemented the subset of functionality that they require and turn away suggestions and PRs from people who need additional functionality, even if the changes would have no material impact on the author's usage. There's probably some really interesting psychological research that could be done here, but to be polite about it let's just say that authors of "opinionated" software tend to have rather colorful personalities.

Wireguard is not opinionated, it just has a very limited scope. It has one job, to create an encrypted tunnel between two endpoints, and leaves literally everything else up to other tools to build higher-level functionality upon. Contrast with OpenVPN which requires you to be your own TLS certificate authority and all the complication that goes along with that.


I mean, that's how you choose to interpret opinionated I guess.

I see it more as "convention over configuration". If you want to (or need to) tweak the configuration and settings extensively, then that tool is perhaps not for you, and that's ok. Perhaps you are a subject matter expert, and you want more control.

If you're ok with sane defaults (that were chosen by subject matter experts, and you are not one), then "opinionated" is a great thing.


> turn away suggestions and PRs from people who need additional functionality, even if the changes would have no material impact on the author's usage.

Whether it impacts a specific use case is usually neither here nor there; it's usually about maintainability. And while finding contributors for open source projects can be difficult, finding people who want to do the thankless work of maintaining code long-term is much harder.


I guess my point was that "opinionated" generally seems to imply maintainability at the (explicit and very intentional) expense of utility.


> what it really means is the developer implemented the subset of functionality that they require and turn away suggestions and PRs from people who need additional functionality, even if the changes would have no material impact on the author's usage.

That's one way of looking at it. Another way of looking at it is to emphasize minimalism, the UNIX philosophy, and keeping maintenance burdens low. Sometimes, neither is the case - Ruby on Rails being the classic example of an opinionated framework, one that did expand to add additional functionality over time.


I think most developers are technocrats at heart and this is the manifestation of that.


I’ve configured IPSec vpns for the better part of 15 years.

After using WireGuard for 5 minutes I knew this was going to be a big thing.

IPsec has too many fucking knobs. It is it’s pitfall.


I feel like a lot of design failures with new wire protocols, come down to the organization responsible for the specification not having enough leverage to convince the clients/stakeholders who will eventually implement the specification to “meet them in the middle” by adapting their systems to suit the protocol; instead, the clients/stakeholders hold all the leverage, and so demand that the specification change to a shape where it has knobs allowing each of them to implement the standard with no change to their current system whatsoever, at the expense of every other client essentially having to reify “the way each other client/stakeholder does things” in the form of each knob.

I wonder if any specification group has ever thrown up their hands and said, “you know what? Fine. Let’s just create one named sub-protocol for the way each of you major players does things; and then have the clients of this protocol do a sub-protocol negotiation; and then have the client use a plugin specific to the sub-protocol that’s been negotiated. Then you don’t need any knobs; all the policy can be baked into the plugin.”

(Come to think of it, this is kind of how the authentication phase of SSH works, when configured to use PAM. “Pretend we’re MIT” (a.k.a. Kerberos); “pretend this is a Microsoft Active Directory domain” (a.k.a. NTLM auth); etc.


I hope WireGuard can come to feature parity with TincVPN; that would be nice. Especially automatic routing and mesh VPN formation; it could really help our multi-cloud container clusters, currently connected using TincVPN, to be a bit more performant.

The difference is that WireGuard is part of the Linux kernel, so it processes packets faster than TincVPN.

Still experimenting with WireGuard and manually creating a peer-to-peer mesh.
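Maintaining such a mesh by hand means every node's config has to list every other node as a peer. A minimal sketch of generating those stanzas (all node names, keys, endpoints and addresses below are made up for illustration):

```python
# Hypothetical inventory of mesh nodes: public key, reachable endpoint,
# and the tunnel-internal address each node owns.
nodes = {
    "node-a": {"pubkey": "PUBKEY_A", "endpoint": "203.0.113.1:51820", "ip": "192.168.50.1/32"},
    "node-b": {"pubkey": "PUBKEY_B", "endpoint": "203.0.113.2:51820", "ip": "192.168.50.2/32"},
    "node-c": {"pubkey": "PUBKEY_C", "endpoint": "203.0.113.3:51820", "ip": "192.168.50.3/32"},
}

def peer_stanzas(local: str) -> str:
    """A full mesh: every node's config lists every *other* node as a [Peer]."""
    stanzas = []
    for name, n in nodes.items():
        if name == local:
            continue  # a node never lists itself
        stanzas.append(
            "[Peer]\n"
            f"PublicKey = {n['pubkey']}\n"
            f"Endpoint = {n['endpoint']}\n"
            f"AllowedIPs = {n['ip']}\n"
        )
    return "\n".join(stanzas)

print(peer_stanzas("node-a"))
```

For n nodes this is n configs of n-1 peers each, which is exactly the bookkeeping tinc automates and wg-dynamic-style tooling could take over.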


One suggestion would be to submit a feature request to Tinc to add detection / support of WireGuard. Tinc could still handle the mesh routing and just hand off the encryption bits to WG. There has been some brief discussion in email threads [1]. Probably more; I have not checked all the archives.

[1] - https://www.tinc-vpn.org/pipermail/tinc/2017-February/004755...


Fascinating idea! I've been really interested in Tinc, but it seems to be languishing where WireGuard is really taking off. The meshing in Tinc seems quite interesting. Though, honestly, I'm pretty happy with OpenVPN.


Tailscale looks promising. (https://tailscale.com/)


I was impressed how quickly I could go from 0 to VPN with Tailscale on Windows. Unfortunately I did not have as nice an experience with the AUR package on Arch. However, it looks like the maintainer replied with a link to instructions on their website for how to get it working!


AUR package maintainer here (also tailscale employee). The poor experience was definitely on me. Relaynode's initial setup flow is a bit weird, and I didn't make the package explain anything. I think you were the first user of the Arch package other than me, so you got to experience the fun :)

Next release will have a better daemon, with a more typical setup flow. If you want to test drive it, `tailscale-unstable-bin` is the AUR package for it.


Is it the AUR package that is the problem or the Linux application itself?


Is tailscale open source like Tinc?

From the website I cannot see it.


I am sick of people shilling to this thing here. Stop exploiting HN for free advertising. Every Wireguard post here has become a free ad for this company.

EDIT: Stop supporting parasites repackaging and rebranding open source and selling it while leaving the author who single handedly made this entire thing possible begging for donations on Patreon


You've been breaking the site guidelines repeatedly, both in this thread and unfortunately in others (and we've had to ask you about this before). We ban accounts that do that. Would you mind reviewing https://news.ycombinator.com/newsguidelines.html and sticking to the rules when posting here? The intended spirit is curious conversation.


Again, please do your job and delete astroturfing comments and ban these users. This company has been exploiting HN for so long to promote itself whenever a post about Wireguard goes to the front page. They don't even have a ready product. This website encourages really sneaky types of marketing if you don't take action.


I appreciate your concern for the integrity of this site, but if you really care about that you should follow its rules, which say clearly what to do with these insinuations, and it isn't posting them here.

I haven't seen any evidence of astroturfing in this case. The user you were accusing above seems entirely legit.

You've posted such accusations to HN several times before. Given how little data we have about each other online, it's easy to connect the dots in a way that jumps to nefarious conclusions about others. If you come here and post those, the odds get pretty high that you're accusing innocent people of bad things. That's not cool, which is one reason the site guidelines ask everyone not to do that. We'd be grateful if you'd stop doing that.


I will stop doing that. But HN should give priority to FOSS projects and to commercial projects made by single developers and small companies that have no money or other way of reaching users, instead of helping big companies and startups founded by millionaires. As for the WireGuard case, if you've been following all the popular threads about it over the last 2 months you will know what I am talking about.

Don't let HN become another ProductHunt.


Last time I talked to him about it, Jason Donenfeld was not upset about Tailscale. You'll have to find someone else to be vicariously outraged for.


Of course I am sure he is extremely happy spending 5 years developing the next big thing then others rebrand it for enterprise and become rich.


Jason comments here all the time and is quite easy to talk to, and I think we're all better off hearing from real Jason, not some imaginary angry Jason you've invented. Not least because there are actual abuses in the WireGuard ecosystem, and your imaginary Jason is obscuring them behind fake abuses.


I didn't say he is angry; my main point was to not advertise for free for a company that is built off the work of a single man, especially when it's FOSS, especially when it's a technical piece of genius like WireGuard, while not supporting the man himself. There are known ways for companies to market their paid product; it's called ads.


You realise most VPN providers have offered OpenVPN for years and now offer Wireguard support, right? Some bundle the software & drivers in their own GUI. I've been happily using a paid OpenVPN client for Mac for years (won't name it here to avoid upsetting you further).

What I'm trying to say is that tailscale is doing nothing new or sinister.


This is not how opensource works.


Actually, I haven't heard of them and they look like a pretty nice way of connecting different resources over the internet. I am not sure what you mean by parasites. I am glad they posted the link in this thread. Wireguard is an opensource project, Tailscale is a paid service. Do you think they compete with each other or you think that people should not share product recommendations at all on HN?


One could just as easily make the opposite complaint that WireGuard is receiving undeserved hype because it is just a tunneling protocol without a control plane.


I am not saying I don't want those features, but I do hope that a VPN in the kernel with one of its primary features being small, lean, auditable code, will think twice about adding a bunch of stuff.

Have you considered Nebula?


Most of tinc's functionality could be implemented in userspace using the wireguard kernel module, and so wireguard itself won't really need to grow.

The one thing I wish wireguard had was overlapping AllowedIPs with 'ip route via' to distinguish (although this isn't a tinc feature either, unless you run it as an ethernet segment). The same result can be achieved using a separate interface for every peer and 0/0 at each end, but it's a bit unwieldy.
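The separate-interface workaround described above can be sketched roughly like this (hypothetical names, keys and addresses; `Table = off` keeps wg-quick from installing the 0.0.0.0/0 route itself, so the kernel routing table decides which tunnel carries which prefix):

```ini
# /etc/wireguard/wgA.conf -- one interface per peer, AllowedIPs = 0/0 on each.
# All keys, endpoints and addresses below are placeholders.
[Interface]
Address = 10.100.0.1/32
PrivateKey = <local private key>
# Don't auto-install the 0.0.0.0/0 route; we route per-destination manually.
Table = off

[Peer]
PublicKey = <peer A's public key>
Endpoint = peer-a.example.net:51820
AllowedIPs = 0.0.0.0/0
```

A second file (wgB.conf) looks the same but for the key and endpoint, and `ip route add 192.168.10.0/24 dev wgA` versus `... dev wgB` then distinguishes the tunnels, one interface per peer being the unwieldy part.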


For WireGuard there's the official https://github.com/WireGuard/wg-dynamic, or also this: https://github.com/costela/wesher.

Or generally for mesh networks as others mentioned: Slack's Nebula, ZeroTier (wait for v2) or Tailscale.


k8s already uses Wireguard for the service mesh internally.

For a globally routed overlay mesh, have a look at https://yggdrasil-network.github.io/

The latest version actually uses the Wireguard TUN library https://yggdrasil-network.github.io/2020/02/21/release-v0-3-...


> k8s already uses Wireguard for the service mesh internally.

Kubernetes does not use anything for the service mesh internally - as it does not provide a service mesh.

By this I can only assume you mean some popular CNI provider uses wireguard, but out of the ones I know of (flannel, weave, calico, canal, romana) I don't believe any use wireguard.


> k8s already uses Wireguard for the service mesh internally.

No it doesn't. There are some CNI plugins that use Wireguard, but it's not standard in any way (like any other network plugin, really).


I think Tailscale [1] can be to WireGuard what Github and Gitlab are to git.

If you haven’t checked them out yet: worth taking a look!

[1] https://tailscale.com


Yggdrasil and GNUnet seem like more interesting alternatives to me (with cjdns and hyperboria as cousins). Although these do not build on wireguard, they employ the same basic stack (plus some to handle a mesh topology).


Is that like Hamachi? https://vpn.net/


What is the difference between this and ZeroTier? https://www.zerotier.com/



Works only with some identity providers, for some reason I could not understand. If you don’t have a Google, Microsoft or corporate email identity provider, you can’t use it.

There might be a reason for this, but still it’s not in the same space as Wireguard.


Young company; covering a large number of users with a simple and quick solution.

Using a Google/Microsoft account is easier than setting up a user service that is comparable in security and ease of use.


[flagged]


I was setting up a Raspberry Pi (3B+) at the office and wanted to connect to it from home.

Researching various approaches lead me to a hn thread recommending Tailscale, signed up, worked for me & now I’m a happy user and planning to get 2 additional new Pis tomorrow (still deciding between the 2G and the 4G model).

edit: I regularly recommend things I find useful to others (especially those that took me some effort to discover) independent of whether they are bits of insight (try x instead of y), open source software or for profit software.

Just thought others might find this useful as well, that’s all.


Accusing someone of shilling is contrary to HN comment guidelines.

"Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email us and we'll look at the data."


Is there a version of Ubuntu that has GUI NetworkManager support for WireGuard? I’m missing the convenience of toggling the VPN on and off from the system menu.


Versions >= 1.20 have support for all the bits and pieces (including routing all traffic). Initial support landed in 1.18.


If that is the case, it looks like Ubuntu 19.10 and later have support.

https://packages.ubuntu.com/search?keywords=network-manager&...


if it does, I don't see "how"


I don't know, try typing "Wireguard NetworkManager Ubuntu" into your favorite search engine?

https://blogs.gnome.org/thaller/2019/03/15/wireguard-in-netw...


I don't know, try reading the parent's comment again and see that it's about the GUI?


this doesn't even work for me in CLI. I'm able to create a connection from a wg0.conf file, but it's not coming up, while regular wireguard tools work just fine


They've backported Wireguard to the next LTS (20.04 aka Focal Fossa) kernel, so wait a bit. Officially, it's due 23rd April.


KDE does support configuring and enabling/disabling WireGuard over the GUI.


Agreed. I have Wireguard installed, but am positive I don't have it configured properly.


it's pretty hard to misconfigure it, and then it either works or it doesn't.


Or login management that doesn't involve pre-shared keys?


A keypair is not a pre-shared key; but if you're looking for a tool to pull public keys (and peer AllowedIPs) from some other central source of truth, a script might be the right call for now.


Plasma Desktop does.


In case anyone wants some user-facing Wireguard docs with examples and further reading, I've compiled some here:

https://github.com/pirate/wireguard-docs


Been using this thru Streisand [https://github.com/StreisandEffect/streisand] and can honestly say it is an excellent experience. Would highly recommend deploying through Streisand / Ubuntu 16 if you'd like to experiment (and run a production setup!)


What are the options right now for open source user/access management around WireGuard? I don’t love the idea of manually writing down keys in a config file.

I’m thinking of writing something to template out configs for short term keys (and automatically reload) based on an OIDC authentication, but seems inelegant.
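One sketch of that short-term-key idea (all users and keys below are hypothetical): the auth step records each issued public key with an expiry, and the server's peer list is re-rendered from only the unexpired entries before reloading.

```python
import time

# Hypothetical registry filled in by an out-of-band auth step (e.g. OIDC):
# each issued public key carries an expiry timestamp.
issued = [
    {"user": "alice", "pubkey": "ALICE_PUB", "ip": "10.9.0.2/32",
     "expires": time.time() + 3600},   # valid for another hour
    {"user": "bob", "pubkey": "BOB_PUB", "ip": "10.9.0.3/32",
     "expires": time.time() - 60},     # already expired
]

def render_peers(entries, now=None):
    """Emit [Peer] stanzas for unexpired entries only."""
    now = time.time() if now is None else now
    blocks = []
    for e in entries:
        if e["expires"] <= now:
            continue  # expired key: the peer is dropped on the next re-render
        blocks.append(
            f"[Peer]\n# user: {e['user']}\n"
            f"PublicKey = {e['pubkey']}\nAllowedIPs = {e['ip']}\n"
        )
    return "\n".join(blocks)

print(render_peers(issued))
```

It is inelegant in exactly the way described: revocation only takes effect when the rendered config is re-applied, so the render/reload loop has to run frequently.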


Could create something where you authenticate to vpn.yourdomain.com in your browser using your preferred method that creates a temporary key and starts your wg client for you.


Yeah that’s what I’m thinking, creating the temporary key by manipulating the config feels inelegant though.


While I don't believe WireGuard is a drop in replacement for IPsec tunnels or OpenVPN I think it is a great solution to add a VPN tunnel back to your home network. I am running a WireGuard server on an Unraid server and it was trivial to setup and I can easily hit near gigabit speeds through it.


> While I don't believe WireGuard is a drop in replacement for IPsec tunnels or OpenVPN

Why?


There's no predefined way of setting up and sharing keypairs, for one. As a company end user logging into a VPN, what you want is a place to input your username and password (and potentially 2FA credentials), not “create a keypair and give the public key to an admin”.


It's true that the WireGuard ecosystem needs these features. But it's also true that people believe VPN software needs lots of features because other VPNs are complex; people do not generally believe these things about SSH, and WireGuard makes VPN tunnels as easy to manage as SSH.

Another thing people might not realize if they haven't had to deal with lots of different VPN configurations is that most of the "user management" and "2FA" features of legacy VPNs are, as the kids say, janky "AF".

Ultimately, organizations should be tying their VPNs, like everything else, into an IdP of some sort, and most of the "user management" and "MFA" stuff belongs to the IdP, not the VPN. People will clearly get WireGuard integrated into Okta.


> Ultimately, organizations should be tying their VPNs, like everything else, into an IdP of some sort, and most of the "user management" and "MFA" stuff belongs to the IdP, not the VPN. People will clearly get WireGuard integrated into Okta.

Right, but at the moment this integration does not exist.


I'm not disputing that.


> what you want is a place to input your username and password

You'd be best without a username a password, just a code.


I am mostly talking about in a business setting. WireGuard hasn't even hit its first "official release". A company is not going to switch to something that has not been thoroughly vetted. Also a lot is going to have to wait on vendor support, like incorporating WireGuard into something like Cisco AnyConnect.


this cisco anyconnect system? https://www.cvedetails.com/vulnerability-list/vendor_id-16/p...

doesn't seem very secure to me.


CVEs mean that a company can take action to mitigate a vulnerability. Wireguard is not mature enough to have something like that. A known vulnerability is bad, but not nearly as bad as an unknown vulnerability.

This is not a knock on Wireguard, I use wireguard and love it. It just has several hoops to jump through before it is ready for widespread adoption. Like NIST approving it to be used instead of IPsec or OpenVPN.


I don't know what this means but can't think of an interpretation that isn't false. WireGuard will certainly do a better job mitigating vulnerabilities than Cisco will, and WireGuard's code will for obvious reasons get more attention than Cisco's horrible VPN code.

It's true that Fortune 500 companies aren't going to deploy WireGuard. They're constitutionally incapable of deploying security gear that isn't awful, which is why a huge fraction of all VPN deployments through the F500 were backdoored in the 2000s.

NIST is never going to approve WireGuard; it's not even a discussion worth having, nor is it NIST's place to certify which VPNs are or aren't safe to use, nor does NIST have the staff to do anything like that.

That's no reason for startup engineers to make the same mistake. Startups definitely do deploy WireGuard.


Will take another look at wg once keys can be stored in non-exportable way on devices, or temporary keys generated per session after passing some auth mechanism. Sounds like a fun personal project.


my point is that it would appear that "Anyconnect Secure Mobility Client" has a shitton of vulnerabilities. sure, wireguard may have some vulnerabilities, but you don't need a formal audit to tell the difference between "this might have some issues" and "holy fuck this is a fucking dumpster fire". you need an audit to tell if "this might have some issues" is "this has some issues" or "this is actually pretty good". in particular, "Anyconnect Secure Mobility Client" appears to have a significant number of local privilege escalation exploits, several dozen since 2011. that doesn't necessarily mean that the protocol is shit, but it probably does. it probably means that no serious security professionals have examined it, and the vulnerabilities that have been found are just the easiest ones that can be found with a scanner.

but even ignoring all of that, wireguard has significantly better security guarantees. https://www.wireguard.com/formal-verification/ claims that "WireGuard has undergone all sorts of formal verification, covering aspects of the cryptography, protocol, and implementation." with references to several formal proofs of the protocol.

furthermore, wireguard has actually received a CVE: CVE-2019-14899, which was posted here only a few weeks ago. it's not wireguard-specific though, it's a general problem with VPN setup on general-purpose operating systems.


I'm sure you know this but, for the benefit of others...

With this exception, WireGuard does not receive CVEs because it is (for now) still considered pre-release software and not recommended for production use.


CVE does cover "pre-release" software, part of the argument being you can't simply label something as "beta" and escape CVE coverage especially if millions of people are using it (Google's Chrome web browser was a good example of this). For example:

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=prereleases


Sure, but Donenfeld has specifically declared that CVEs should not be issued for "pre-release" Wireguard components[1]:

> Current snapshots are generally versioned "0.0.YYYYMMDD" or "0.0.V", but these should not be considered real releases and they may contain security quirks (which would not be eligible for CVEs, since this is pre-release snapshot software).

[1]: https://www.wireguard.com/


Also to reiterate: "which would not be eligible for CVEs, since this is pre-release snapshot software" is not correct. It's usually correct, but not 100%.


Yeah and he's not the boss of CVE. If someone wanted a CVE for wireguard I'd be happy to help them get one.


Sorry but... no. You are wrong. Completely wrong.

CVE is just an identifier for a security vulnerability.

"Common Vulnerabilities and Exposures"

CVE generally covers released software, hardware and (in the process or being added officially) services. It also covers beta software (https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=beta).

The reason Wireguard doesn't have CVEs is nobody has bothered to request them.

For more details on CVE there's a bunch of episodes covering it: https://www.opensourcesecuritypodcast.com/search?q=cve


In reality WireGuard's only valid selling point is that it's easier to use than alternatives.

It is not, for example, faster than the literally dozens of ASIC implementations of bump in the wire IPsec that scale to line rate n * 100Gbps. It's not even faster than kernel IPsec on architectures which support AES with dedicated instructions. Incidentally, this is the default configuration for Strongswan on supporting hardware.

In fact, the benchmarks published prominently on the WireGuard website (https://www.wireguard.com/performance/) are completely nonsensical and compare the clients under circumstances where neither should be bottlenecked and where it is known the underlying algorithm with AES-NI is considerably faster than chacha20-poly1305. Yet, they find that WireGuard achieves higher throughput (magically exceeding line rate, in fact!) in spite of more header overhead and a slower algorithm. They find WireGuard is lower latency by something like an order of magnitude more than the actual single packet latency for an IPsec connection with AES-NI. When I find the official hard data about a product to be complete bullshit, it raises a lot of red flags for me. Either the authors of this marketing fluff are completely ignorant or completely dishonest, but in neither case does the material motivate interest in the product.

It's extremely debatable that it's more secure than common IPsec implementations. The core IPsec implementation is a very simple state machine which has been under review by virtually everyone with an interest in secure comms for decades. It's very wishful to suggest that some hipster shitware that got puked out a few years ago because Strongswan was too hard is "more secure".

If you're in the position of designing a secure interconnect for something more consequential than a friend accessing your home media server and do not have the luxury of abdicating responsibility for the outcome, the fact that the client is easy to use is perhaps the single lowest bullet point on your list of priorities. Interoperability with existing software and hardware, flexibility to adapt to different customer environments and requirements, maturity and proven performance all rate much more highly. IPsec is and has been the go-to for that, while WireGuard is drenched in hype and bullshit and completely unproven.


Those benchmarks are weird. We did a test, just for fun, comparing wireguard and IKEv2 using strongswan (using ike=aes128gcm16-prfsha256-curve25519!). Strongswan came out ever so slightly faster, but with a SHITLOAD of retries. We tried different things, but the only way we got the retries down was switching cipher to chacha20-poly1305 (which made it sliiiightly slower than wireguard). There was basically zero network latency in this test, which makes me wonder how IKEv2 would have looked over the real internet.

As you said: I am not a network admin, so I probably botched something. Which, I guess, is another point for wireguard in my book. It lacks many of the bells and whistles of IPsec, which means less to configure for the average stupid home user (me).


So when they say it will be embedded into the Linux Kernel, what does that mean exactly? Does that mean I will be able to open a terminal and type: WireGuard and from then on my connection to the internet will be secure so long as I don't close the terminal or what?


> Does that mean I will be able to open a terminal and type: WireGuard and from then on my connection to the internet will be secure

It's more like how iptables/nftables is part of the kernel. You need a recent kernel along with user space tooling. But it will become part of virtually every Linux distribution.

As for "my connection to the internet will be secure" - that's possible, but the main use case right now is "my connection to my vpn/server will be secure".

Additional configuration is required to route all traffic through the wireguard tunnel, and make sure all other traffic is dropped - and to make sure all traffic is dropped, rather than sent in plaintext when the tunnel goes down (a "kill switch").

I'm sure we'll see many tools and scripts that will help automate such setups.

But if you just want a udp routed VPN - you might want to look at zerotier or tinc.
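As a sketch of what such a setup looks like, the wg-quick man page documents a kill-switch arrangement roughly along these lines (all keys and addresses below are placeholders; verify the firewall rules against your own distro and setup before relying on them):

```ini
# /etc/wireguard/wg0.conf -- client side, route-everything + kill switch
[Interface]
Address = 10.0.0.2/32
PrivateKey = <client-private-key>
# Reject any outbound traffic that would escape the tunnel (kill switch)
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all IPv4 and IPv6 traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```

The PostUp/PreDown rules mean that if the tunnel goes down, traffic is dropped rather than sent in plaintext.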


This may sound like a newb question but.. is my connection to my vpn/server not already secure on Linux?


Wireguard is one way to secure your connection to another machine. That machine could be a "VPN server" - which typically mean one of two things:

1) By connecting to that server, you get access to private resources as if you were on the same network. Say, access to a printer, a web camera or a file server that aren't exposed to the Internet.

2) You gain access to the Internet through that server, so that your publicly visible IP changes. This prevents anyone between you and the server from seeing the content of your traffic (eg: your ISP, the hotel IT staff who run your free wifi). It can also grant you access to resources that are exposed to the internet - but filter access based on IP. Such as a CRM system, or webmail system.

If you are connecting to a VPN server, then, by definition (virtual private network) - your connection should be secure. Wireguard is one way in which that access can be secured - and it's new/modern, simple and widely regarded as following best practices. Alternatives are ipsec via eg strongswan, OpenVPN and a bunch of nasty proprietary solutions constructed out of mixes of libssl and obscure hedge magic.


It's a kernel module. What you do in practice is to have a configuration file somewhere and then have a command line tool (wg(1)) read that file and have the wireguard kernel module create a wg0...n network device.

The out of kernel alternative is how OpenVPN has always worked, by creating a TUN/TAP device, which basically creates a pipe<->network device bridge, and a user space process reading/writing from that pipe.


If you have the right config files set up, it's a matter of typing 'wg-quick up <configname>' ... and then you will have a VPN up. Bring it down again with 'wg-quick down <configname>'.

The configs themselves are basically just the IPs used, plus the peers and keys used to communicate with those peers.

(Running a 70+ node full-mesh vpn)
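For anyone curious just how small those configs are, a minimal client-side example might look like this (all keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf -- placeholder keys and addresses
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
```

Key pairs are generated with `wg genkey | tee privatekey | wg pubkey > publickey`; for a mesh, each node simply lists every other node as a [Peer].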


> 70+ node full-mesh vpn

~2^70 VPN connections?


Wireguard is not connection based, so a full-mesh VPN with n nodes essentially just means each node has n-1 peer keys and maintains a routing table with n-1 entries for the VPN.
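The "routing table" here is WireGuard's cryptokey routing: each peer's public key owns a set of AllowedIPs, and an outgoing packet is sent to the peer with the most specific matching prefix. A toy model of the idea (made-up keys and prefixes; a Python sketch, not WireGuard's actual code):

```python
import ipaddress

# Each peer's public key maps to the prefixes it is allowed to claim.
peers = {
    "peer-A-pubkey": ["10.0.0.0/24"],
    "peer-B-pubkey": ["10.0.1.0/24", "0.0.0.0/0"],  # B is also the default route
}

def peer_for(dst):
    """Pick the peer owning the longest (most specific) prefix matching dst."""
    dst = ipaddress.ip_address(dst)
    best = None
    for peer, prefixes in peers.items():
        for p in map(ipaddress.ip_network, prefixes):
            if dst in p and (best is None or p.prefixlen > best[1]):
                best = (peer, p.prefixlen)
    return best[0] if best else None
```

Inbound packets are checked the same way: a decrypted packet whose source IP isn't in the sending peer's AllowedIPs is dropped, which is what ties IPs to keys.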


There are still 70x69/2 connections going on under the hood. You can't completely ignore that complexity - if there are underlying connectivity issues, specific host-host traffic will fail (eg due to NAT).

My wireguard setup actually has n x (n-1) config files/instances (current n=8), as that's the only way to shuffle default-gateways over it. It's a little unwieldy, but still manageable. Whereas I can't imagine doing the same with OpenVPN.


Wouldn't it only be 70^2 (or 69^2 or 69*70 maybe?)? Each of the 70 devices has a VPN connection to 69 other devices. So only ~5,000 total


It would be (69*70) / 2, since connections are bi-directional, so only ~2500 connections.
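For what it's worth, the arithmetic as a quick sketch:

```python
def full_mesh(n):
    """Per-node peer count and total undirected links in an n-node full mesh."""
    return n - 1, n * (n - 1) // 2

peers_per_node, links = full_mesh(70)  # 69 peers per node, 2415 links total
```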


wireguard uses UDP so they're not actual "connections"


Yeah this is a critical point that the others are ignoring, it only sends packets when there's actual traffic. Maintaining 5000 connections is pretty easy when they're not big stateful TCP sockets.


That's right, I got the two mixed up.


What makes Wireguard more secure? The article appears to make some weak claims about a smaller codebase and fewer configuration options, but I don't think that translates directly into it being more secure?


* It uses a single set of well-trusted modern primitives and so avoids the attack surface of negotiation.

* Those primitives are used for a Noise construction, and Noise is itself reasonably well studied and increasingly formalized; we can be somewhat confident WireGuard is skipping over the 2 generations of protocol vulnerabilities SSL/TLS faced.

* Perhaps most importantly, the codebase is tiny and designed to minimize its attack surface; for instance, the protocol itself is designed to be implementable without dynamic memory allocation.

* WireGuard is itself minimal and doesn't implement higher-level features like user management, which means that those features aren't coupled and entangled into the core engine, and can be implemented straightforwardly through a clear interface.

In general, and contra this article, "smaller codebase" usually does mean "more secure".


The main idea (which has a fair amount of merit!) seems to be: If you give people too many knobs, they will invariably get confused and turn them the wrong way, creating an insecure configuration.

E.g., IPsec has a “none” cipher!


the "none" cipher isn't even that bad... if you do a packet capture, you can clearly see that the data is unencrypted. the worst part about IPsec is that there are many modes which look secure, but actually aren't secure at all. examples: encrypted but unauthenticated packets, encrypted but unauthenticated channel negotiation, encrypted by default but downgradable cipher negotiation...


This is why I abandoned using it: knowing the average quality of online articles, I couldn't trust that the configuration was secure, and there were no official, known-secure templates.


This is a problem with the IKE implementation. A secure IPSec configuration on OpenBSD is a single line, and you can copy+paste it from the excellent man page.

Part of what makes WireGuard "simple" is that it doesn't support any kind of key management--i.e. PKI. Instead you're expected to copy keys around manually. IKE is the most complex part of the IPSec software stack but in many ways the most important part.

Ironically but entirely predictably, people are using homegrown scripts and proprietary third-party services to replace the missing key management aspect of WireGuard. When these turn out to be insecure, or at least the weakest link in the chain, nobody will ever blame WireGuard, even though it will be a predictable consequence of using WireGuard.


> on OpenBSD is a single line

Nice, but it would be good to know whether that is the default on Linux as well.

I don't agree with the claim that IPSec somehow automates PKI; it's still very disgusting compared to things like (LetsEncrypt's) ACME. I hated dealing with the PKI on Linux far more than anything in Wireguard, especially when trying to revoke old keys. The fact that clients also differed heavily in what they supported was also very annoying.


> Nice, but it would be nice to know if that is the default or not on Linux as well.

It's because OpenBSD uses a much nicer, more declarative configuration file syntax, whereas the options on Linux, like Openswan, use a less expressive key-value syntax. To be fair, AFAIU Openswan supports more IKE extensions, and is an older project with more baggage than OpenIKEd or OpenBSD's ipsecctl configuration compiler front-end for isakmpd. But that only highlights the fact that much of the complexity of IPSec is due to history, not because IPSec is intrinsically too complex to make it useable. The SLoC of IPSec kernel code are comparable to the SLoC for WireGuard kernel code. There are smarter ways to implement IPSec and IKE, especially when you have the benefit of hindsight.

> I don't agree with the claim that IPSec somehow automates PKI, it's still very disgusting compared to things like (LetsEncrypt's) ACME

It doesn't automate CA renewal, but you can't even do any kind of PKI using WireGuard as WireGuard doesn't support key signing or key authorities.

FWIW, OpenBSD provides a utility for generating and manipulating X.509 certificates for use with IKE.[1] I've never used it as I'm unfortunately quite familiar with PKIX infrastructure and have my own tools, but AFAIU it's what most people use.

None of this is to say that, when comparing apples to apples, WireGuard isn't a better protocol than IPSec. But SSH also has warts and it would be trivial to come up with a better replacement protocol. We don't need to because we have OpenSSH, a smart implementation that continually discards as much baggage as it can, while still interoperating with a wider ecosystem of alternative implementations.

The fundamental problem is that 1) key management is hard, 2) key management is critical to overall safety and usability. WireGuard sidesteps all of this. It looks great on paper because it's only solving the easiest problem. And it seems great in practice because the ugliness of the ancillary infrastructure isn't counted against it, even though from a holistic standpoint it should be.

[1] https://man.openbsd.org/ikectl.8#PKI_AND_CERTIFICATE_AUTHORI...


> The SLoC of IPSec kernel code are comparable to the SLoC for WireGuard kernel code.

I haven't checked whether this is true, but even if it is, that's a damning indictment of IPsec, because on Linux, the entire connection establishment is in userspace, and the kernel only handles per-packet encryption and authentication. WireGuard has the entire negotiation sequence, authentication, routing, timeouts, rekeying, etc in the kernel. With IPsec, you need to have a userspace daemon to manage all of that, with significantly more LoC than the bare per-packet essentials. With WireGuard, you just load the keys into the kernel and you're done. I bet that you're also counting Zinc against WireGuard, but not counting the entire crypto API against IPsec (which, unlike with WireGuard, you might end up actually using).

I suspect that it's not actually true though in the first place, once you add in all of the other stuff like iptables -m policy that only exists to support IPsec.


People say complicated PKIs are the most important parts of systems because they are complicated and hard to work with and people get invested in them and all the time they've sunk into them. But in reality, far more people have been secured by SSH keys than by IPSEC keys, and by Signal than by S/MIME.


Yet Web PKI trounces all of them combined.

Signal isn't an apposite comparison as Signal implements key signing and key exchange with Signal as the sole certificate authority. Who do you think attests to the authenticity of phone numbers, and how do you think they do so? Indeed, Signal exemplifies exactly what I was saying: key management is crucial, key management is hard. Secure, trusted key management is like 90% of Signal's value-add.

At large organizations SSH is often used with signed X.509 certificates. OpenSSH resisted the feature request for years, but the demand was overwhelming. The rise of products like Teleport and ScaleFT are perhaps best characterized as extremely convoluted combination key management and VPN solutions.

You say "complicated PKI", but my point is that WireGuard has no PKI. In any event PKI is intrinsically complicated. Take WireGuard and add the simplest possible trusted authority scheme on top and you've doubled the conceptual complexity by 2-3x, and possibly the SLoC, too.


Neither Signal nor SSH in their most common mode of use have a "PKI" in the sense you mean, which is my point. The Web PKI is something we live with because we have to, not something anyone sets out to re-create.

There are organizations that benefit from a PKI-ier deployment of SSH, but even there, the "I" part of the PKI is extremely attenuated, and most of the real interesting work is done by a single centralized point of trust that mints time- and usage- limited token-equivalents. They're not trees so much as they are vines or fungus colonies. They're great, but they're certainly not a vindication of the 1990s concept of a PKI.


> Neither Signal nor SSH in their most common mode of use have a "PKI" in the sense you mean, which is my point.

I assume then that you exchange the public keys of your Signal contacts over SMS. Or perhaps you scp them to a server you share with friends.

> The Web PKI is something we live with because we have to, not something anyone sets out to re-create.

We have to because it's crucial. If public key attestation didn't matter we would have dispensed with the lock icon and "this certificate is untrusted" popups rather than laud the emergence of Let's Encrypt and the ACME protocol.


> > Neither Signal nor SSH in their most common mode of use have a "PKI" in the sense you mean, which is my point.

> I assume then that you exchange the public keys of your Signal contacts over SMS. Over perhaps you scp them to a server you share with friends.

You are meant to verify your contacts' keys (aka "safety numbers") either in-person or otherwise out-of-band. The Signal server does not in any way sign or endorse the identity keys they serve to users (yes, they're delivered over TLS but that doesn't count).

Yes, Signal does do some best-effort verification of your phone number when you register a device but that's just to avoid DoS. SMS can be easily intercepted.


This is responsive to essentially nothing I wrote.


The main argument in the article (and other places I’ve seen WG discussed) is the relative ease of auditing the core code as well as auditing implementations. In that context it’s less of an augment that it’s “more secure” and more of an argument that it’s “more cost/time effective to assure that it (the core code or Any implementation) is secure”.

That argument can be strong when considering that effective security in most projects comes down to whether assurance of security can be discerned effectively within a limited time window. Often very limited.


Those help it be more secure without being a proof of security.

Fewer configuration options/smaller codebase mean you have less to screw up while programming, less to read while debugging, and fewer opportunities to get things wrong while deploying.


That's not really a weak claim: https://stackoverflow.com/a/56043694 (citations in Code Complete)


The article says the smaller codebase doesn’t make Wireguard more secure, but it only makes it easier to audit.


Smaller codebase means less chance of bugs.

But I agree, it should really be audited properly before this statement can be made.


WireGuard has been extensively audited many times for the past 4-5 years, including several formal proofs[1]. I would argue no other VPN has been as thoroughly audited (not to mention that the codebase size means that an entire-codebase audit is actually possible). That doesn't mean it's perfect (and it has had bugs), but it's definitely exceptionally well designed and written code.

In addition, the crypto design (beyond it being opinionated and thus no way to misconfigure into using the "null cipher") is arguably much more secure by design than other systems. For instance, WireGuard eliminates entire classes of vulnerabilities through careful protocol design while also adding fairly neat features (such as being impossible to port scan) -- the author explains this much more eloquently than I can[2].

[1]: https://www.wireguard.com/formal-verification/ [2]: https://www.youtube.com/watch?v=CejbCQ5wS7Q (about 23 minutes in)


Freedombox makes wireguard so incredibly easy to deploy: https://raymii.org/s/tutorials/Wireguard_VPN_on_Freedombox.h...


Can't wait to not have to faff about with configuring strongswan/ipsec for roadwarrior configurations anymore. Setting up WireGuard server and clients on notebooks and phones was a breeze. Really a huge leap for me in terms of usability.


TL;DR: Should I keep fussing with PiVPN or try something like TincVPN?

Semi-OT: So I just installed PiVPN to use with this protocol to try and do a small vpn at home (all I want is to go to my domain, auth, and be on my LAN so I can RDP / VNC) and the wireguard bits worked great, and the install process was buttery smooth, even on a Raspberry Pi Zero W.

But - my network lack of knowledge is probably hamstringing me. I opened the WG port on my router and confirmed the dns hostname I'm using corresponds to the public IP, but I'm not able to get the wireguard clients to connect. The tcpdump doesn't show any incoming traffic on the port at all.

Should I keep fussing with PiVPN or try something like TincVPN or Tailscale? I have not been able to get a VNC or RDP session going over tailscale even though all my machines are able to connect to the Tailscale network.

I want to use wireguard, everyone says it is so good, and OpenVPN does seem a bit boring, but ultimately I'm just hitting a wall when it comes to the use case of 'auth, you are on your home lan, connect as if you are at home connected to wifi'


Make sure that the port is correct and it is UDP (not TCP).

(I just did the same setup with PiVPN. Somehow I got a wrong port number first, but then it worked)


OK it defaulted to UDP, got nothing, changed to TCP, got nothing. Will change it back and try again.

I will also double check the port number.


While you're at it, check and double check your port forwarding settings. I got bit by this recently.

My owned router had the right ports opened, but the AT&T bridged router did not. Be sure you open ports on both sets of routers, otherwise your owned router will never have a chance to allow the traffic in the first place.


Have you tried using the tailscale interface IPs (100.x.x.x)? I've been able to use VNC over those addresses.


I have, but I've not been able to VNC over them. I've made sure the FW rules let tailscale do anything and have even tried turning both firewalls off entirely but at least for TightVNC it doesn't like it, nor RDP (RDP enabled in System settings).

But this does give me hope that it can be done in the first place, I just have some setting goofed up somewhere. I will restart my tailscale stuff now that I know someone out there has done it.

Thanks!


pivpn is the easiest way to set it up.

Great tool for newbies like me.

QR setup for mobile also available.

https://github.com/pivpn/pivpn


Will be great when this becomes mainstream & hopefully commonplace.

A ton of links are conceptually point-to-point but not encrypted as such, because existing means are a pain in the ass.


WireGuard is nice and fast indeed, but unusable for me at work, because pretty much all outbound UDP-traffic is filtered.

Having a TCP-based option sure would be nice.


I use wireguard on top of udp2raw to power through UDP-blocking firewalls all the time

https://github.com/wangyu-/udp2raw-tunnel

Added bonus: it's not TCP


What a cool solution! Thanks for linking it.


I'd be fairly confident your work doesn't want people making random VPN tunnels from their work laptops.


Probably not but nobody said it was the work laptop just at work. It'd be pretty hard to get WG installed on a work laptop in the first place.

My work does the same kind of thing on the guest SSID, drives me nuts.


I've had consistent success running wireguard over 443 on a wide variety of otherwise limited networks now that quic has been around a while.


Yes, this is a concern; as I'm testing it as a daily driver, there are quite a few public networks it won't work with. For non-technical end users it's a show stopper.

Without a TCP option you're back to running dual wg and ovpn services, and pretty much where we are with IPv4 vs. IPv6. One is 'better' but the other works everywhere.


Are there any official plans for 2FA in Wireguard?


My understanding is that the plan for WireGuard is to nail the engine and present a clean interface to system integrators, who will build their own authentication systems on top of it. The most sensible way to do MFA for WireGuard is probably through an IdP.


While we wait for something proper, you can always "patch" it on top of the connection like a quick PoC I made: https://github.com/qzio/w2fau2f

Also, see: https://lists.zx2c4.com/pipermail/wireguard/2017-September/0...


I think at this time, the only option you'd have is a captive portal for the 2FA, and then unfirewall the in-VPN IP. WG has no provision for "connection state", only "handshake happened X seconds ago". If there's no traffic, there's no handshake.


there is no authentication that would need a second factor in wireguard. in wireguard you authenticate the host, not a user


actually there is no authentication in wireguard. only identification


Each node has a list of public keys of nodes that it authorizes to communicate with it. Those nodes authenticate (provide proof of their identity) themselves via the exclusive ownership of their private keys.

So I don't see your point.


What would the point of 2FA be? Interested as a use-case I don't quite follow.


If you are using a VPN to access a sensitive network (home or office), you want to make it harder for an attacker to steal keys or passwords to the network (especially since any roaming devices are more vulnerable to evil maid attacks). 2FA through a token or phone apps means they now need to compromise two devices instead of one.


OK, I get you. I think we're coming from different angles. You're concerned about who gets into a network; I'm concerned with identification rather than authentication.

Thanks for the reply.


I reviewed vpn solutions on Linux a few weeks ago and Wireguard was the easiest to setup among the secure protocols.


I've used IVPN for years and it works flawlessly.


Don't forget to support Jason, WireGuard's author, on Patreon. https://www.patreon.com/zx2c4


Wow, $10k/month is a lot more than a 'sustainable full-time job' would pay :) At least here in Europe.

But of course what he's getting now ($1212) is nowhere near that.


$10k a month is a 'sustainable full-time job' for a senior developer once you factor in health insurance and the additional 7.65% self-employment tax.

I prefer this model for opensource software. We get an awesome product and he gets enough money to sustain himself while maintaining it. Seems like a fair deal for all.


That's quite a bit less than someone with his skillset could earn in the US. He could easily earn 2-3x that amount in Silicon Valley, and possibly quite a bit more.


Ok I didn't know that. I heard IT wages in the US were good but I didn't think they were that good. Here in Spain you're doing well with $2200 and that's a senior position at a big multinational.


This is more or less what a senior frontend dev usually makes in the US. The author single-handedly made the first serious FOSS VPN that can replace IPSec and OpenVPN.


   $ systemd
   $ wired


Recentish negative article about Wireguard:

* https://blog.ipfire.org/post/why-not-wireguard


> Is WireGuard faster than other VPN solutions?

> ChaCha20 is a stream cipher which are easier to implement in software. They encrypt one bit at a time. Block ciphers like AES encrypt a block of 128 bits at a time. ..

Wow. I'm avoiding wireguard for other reasons, but there is a lot of FUD in that article.


Very interesting read.


They quote Mullvad in the article...aren't they now basically fucked with that new invasive Swedish law that was passed recently? Sorry, don't have a source but saw it the other day.


Not so sure, as long as you cannot disable logging:

https://www.perfect-privacy.com/en/blog/wireguard-vpn-pros-a...


Their use case may require it, not true for others.


What they want to do, cannot be done by Wireguard, because Wireguard does not have the concept of "VPN sessions / connections". What they probably need to do is to assign each customer a fixed private IP for use within their VPN, e.g. from 10.0.0.0/8.

When those are not enough any more, they need to segment their VPN, so they can re-use the private IP space in each segment.

W.r.t. "NeuroRouting and TrackStop not possible", they could route their stuff through a TUN interface to do whatever they want to do in user space. With a performance cost.


This is a common criticism of WireGuard, but it looks like those services are looking for excuses to explain why they don't offer WireGuard yet. As far as I understand it:

> What they probably need to do is to assign each customer a fixed private IP for use within their VPN, e.g. from 10.0.0.0/8.

Actually, they can set a different IP for each session and rotate them by giving it to the client out of band, for example when it authenticates to the service.

> When those are not enough any more, they need to segment their VPN

Like with all other VPNs right? They could also distribute IPv6 for the tunnel and this would not be an issue.


> Actually, they can set a different IP for each session and rotate them by given it to the client out of band, for example when it authenticates to the service.

Like I said, Wireguard does not have the concept of sessions. You could add your own proprietary "stuff" around Wireguard to add that concept, but then you don't need anything extra from Wireguard. You add the keys of the users as part of the session setup and remove them when the session is destroyed. Of course, this means that clients have to use a client tool provided by you.


There is a handshake at most every two minutes. Is it not possible to say e.g. fetch a new key if the last handshake was an hour ago?
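Something along those lines could be scripted on top of the latest-handshake timestamps that `wg show` reports; a hedged sketch of just the decision logic (the threshold and peer names are made up):

```python
import time

HANDSHAKE_MAX_AGE = 3600  # re-key peers whose last handshake is over an hour old

def stale_peers(latest_handshakes, now=None):
    """Given {pubkey: last_handshake_unix_time}, as reported by
    `wg show wg0 latest-handshakes`, return peers due for re-keying.
    A timestamp of 0 means the peer has never completed a handshake."""
    now = now if now is not None else time.time()
    return [peer for peer, ts in latest_handshakes.items()
            if ts == 0 or now - ts > HANDSHAKE_MAX_AGE]
```

An external daemon could run this periodically and remove/rotate the stale peers' keys; WireGuard itself has no such notion built in.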


What I don't like about WireGuard:

- Basically no real user or admin-oriented docs. There's some example configs and some getting started guides, and then some crypto-nerd look-how-secure-our-algorithms-are docs, but no real guidance on how to set up a reasonably simple network of hosts.

- Authentication/authorization is just IP addresses and public keys? What about users and service accounts that you want to rotate the credentials of? What about SSO? What about fine-grained access control? What about <insert all of the enterprise things>?

Big static keys and open-ended authorization by default are really not where we should be going with modern security practices. If I just want a layer 3/4 tunnel with public keys, SSH already does that. Sure, WireGuard is basically "SSH plus some easier routing", but I don't need an iteration on SSH, I need an iteration on OpenVPN, which can actually support most enterprise needs. The SSH (and WireGuard) model doesn't scale, due to a lack of functionality.


If you want SSO, or fine grained access control, the idea is you would do that at a level above wireguard. For example, I'm prototyping a small CLI that talks to hashicorp vault via OIDC/OAuth2, and then creates a wireguard key pair + configuration locally, submits the public key to vault, and then the wireguard "server" is configured with a simple daemon that pulls all the public keys from vault and generates a wireguard configuration allowing access from those public keys.

This is a simple example, but much of what you need to do can be done with layers on top. This is similar to iptables, in that you can use `firewalld` or `UFW`, which both use iptables under the hood.
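To make the "generates a wireguard configuration" step concrete, a minimal sketch (the function and field values are hypothetical; real tooling would also handle key removal and atomic reloads):

```python
def render_wg_config(private_key, listen_port, peers):
    """Build a minimal wg-style config from (public_key, allowed_ips)
    pairs pulled from an external store such as Vault.
    All key material here is placeholder text, not real keys."""
    lines = ["[Interface]",
             f"PrivateKey = {private_key}",
             f"ListenPort = {listen_port}"]
    for pubkey, allowed_ips in peers:
        lines += ["", "[Peer]",
                  f"PublicKey = {pubkey}",
                  f"AllowedIPs = {allowed_ips}"]
    return "\n".join(lines) + "\n"
```

The rendered text could then be applied with `wg syncconf`, so adding or revoking access is just a matter of changing what the store returns.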


Have a look on how Cloudflare's WARP vpn handles this: https://github.com/aghorler/cloudflare-warp-wg-client


It sounds cool, but it also expands the number of components that have to be made resilient to failure and attack. Your HA vault+consul clusters, HTTPS & OAuth2, key generation, and automation pieces (inc. message passing & load balancing) all need to be working correctly. Compare that to a single stateless server which spits out an OAuth2 login url to a client, receives a token once the client is authed, and opens a connection with that user's specific network authorization.


Agreed, but as OpenVPN has shown us, it's not a guarantee that security will be any better if everything is contained within a single process.

It's also possible to use something other than Vault, you could use LDAP for example, but Vault lets you use multiple authentication mechanisms, and can be used for other purposes, so it's kinda a multi-tool. Additionally, I'm not using consul, just Postgres on the same host as Vault.


Are you going to open source it?


If it gets anywhere, yes. I've been a bit busy with life and it's been on pause. The other aspect is that this isn't super useful yet because I still need to implement the server side, as well as figure out how to make this usable as a non-CLI for my mobile clients (iOS), which are the majority of what I use the VPN for.


WireGuard is a networking primitive. It creates a secure tunnel between two points and only contains the things necessary to make that happen.

The rest of what you're asking for can be implemented on top of WireGuard in external tools, and I fully expect it will be over the next few years, given the adoption so far.

And given you mentioned SSH... that's basically what it is, yes: layer 3 forwarding over SSH, except done in a way that's actually usable. (For the problems with tunneling over SSH's TCP transport, see e.g. http://sites.inka.de/bigred/devel/tcp-tcp.html)

This is a step back towards the unix philosophy of providing simple pieces that can be stacked together to build more complicated things, not a drop-in replacement for OpenVPN et al. Most of us are excited because (1) not everyone is an enterprise, and that "we need everything and the kitchen sink" philosophy makes OpenVPN annoying to work with for everyone else and (2) we'll probably see a lot more options cropping up in this space soon built around WireGuard now that developers have been given a secure primitive to build upon.


>Basically no real user or admin-oriented docs. There's some example configs and some getting started guides, and then some crypto-nerd look-how-secure-our-algorithms-are docs, but no real guidance on how to set up a reasonably simple network of hosts.

It wasn't really in release mode until it was merged into the kernel, so that's pretty understandable. I had no networking knowledge aside from pentesting and I was able to get a tunnel working.

I don't think Wireguard was ever going to be like those other things. It sounds like you should wait for products to be built on top of Wireguard. Judging a fish's ability to climb a tree and all that.



Very much agreed. While for simple use cases static keys are more than fine, a proper PKI is basically the only sane way to deploy a VPN in an enterprise or even small company setting.


What do you mean by static keys? All keys are changeable at runtime and wireguard uses public key cryptography.


That all clients have to be configured on each server.


Server has to know what clients to accept in any VPN solution. I still don't see the point.


> Server has to know what clients to accept in any VPN solution. I still don't see the point.

In OpenVPN and others, the server can just check the certificate presented by a client against a shared CA. The certificate can be signed/issued by a totally different system.
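To illustrate the PKI model: the server only needs the CA certificate, not any per-client state. A toy sketch with OpenSSL (all filenames and subjects here are hypothetical, and real deployments would use proper key protection and extensions):

```shell
# Create a toy CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=toy-ca" -days 1
# A client generates its own key and a CSR; this can happen on a totally
# different system, which only ever sees the CSR, never the CA key
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
    -subj "/CN=alice"
# The CA signs the CSR, producing the client certificate
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out client.crt -days 1
# The VPN server can now verify ANY such certificate against the CA alone
openssl verify -CAfile ca.crt client.crt
```

Contrast with WireGuard, where each client's public key must be individually listed as a `[Peer]` on the server.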


WireGuard doesn't respond at all to unauthorized traffic (it runs over UDP, so there's no SYN/ACK handshake to begin with): a handshake packet is silently dropped unless it carries a valid MAC keyed on the server's public key. This means you can't scan for WireGuard ports without already knowing that key.
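Strictly, the packets aren't "signed" in the public-key sense. Per the WireGuard whitepaper, the handshake initiation carries a `mac1` field: a 16-byte keyed BLAKE2s MAC whose key is derived from the responder's public key. A sketch of that construction (the random byte strings stand in for a real Curve25519 public key and handshake body):

```python
import hashlib, os

# mac1 per the WireGuard whitepaper:
#   mac1 = Keyed-BLAKE2s(key = BLAKE2s("mac1----" || server_pubkey), msg)[:16]
# A scanner that doesn't know server_pubkey can't compute a valid mac1,
# so the server drops the packet without responding.
def mac1(server_pubkey: bytes, msg: bytes) -> bytes:
    key = hashlib.blake2s(b"mac1----" + server_pubkey).digest()
    return hashlib.blake2s(msg, digest_size=16, key=key).digest()

server_pubkey = os.urandom(32)   # stand-in for a Curve25519 public key
handshake = os.urandom(116)      # stand-in for a handshake initiation body

tag = mac1(server_pubkey, handshake)
assert len(tag) == 16
# Guessing with the wrong key yields a different tag:
assert mac1(os.urandom(32), handshake) != tag
```

Note this only gates whether the server answers at all; actual peer authentication happens later, in the Noise handshake itself.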


I see. Though if you ever want to revoke access, you still can't avoid distributing per-client information to the VPN servers; it will just be a blacklist-style list (a certificate revocation list) rather than a whitelist as in WG.



