VPN originally meant something quite different from the commercial consumer VPN product that Mullvad represents, and was more like the encrypted overlay network provided by Tailscale. These are coming together again in this revolution of the wheel of reinvention. I'm not using "reinvention" in any negative way here; this is good, I think.
For history, and how some people (John Gilmore[1]) thought ubiquitous interoperable VPN tech (using the IETF-standardized IPsec) could be used to end-to-end secure internet traffic generally, see e.g. this FreeS/WAN rationale from the 90s: http://web.archive.org/web/20210125023625/https://www.freesw...
Then in between then and now were the VPN dark ages, where it was mostly only used as a tech to access old-timey corporate "internal networks".
[1] https://en.wikipedia.org/wiki/John_Gilmore_(activist)
Don't forget that historically, a "half-measure" a lot of people used to get around regional blocking was "web proxies" like those linked to by proxy.org. I used to operate one as a young teen, and I will say they are a security nightmare -- nothing stops a web proxy operator from sniffing all user credentials passing through them, and modifying PHProxy to do this is trivial.
Personally, I used to run a domain parking service (back when I was a teen in the early '00s) that used the parked domains as web proxies and replaced every AdSense block it could find in the proxied content with my AdSense code, doing a 50/50 split between my code and the domain owner's code. Google eventually became wise to this and banned that sort of thing, but it was pretty cool while it lasted. Honestly, I think it was super fair, considering we didn't even add any new ad blocks, just reused the ones already in the content.
With practically ubiquitous HTTPS, these days proxy use is mainly a privacy risk, since for HTTPS, they usually can only support transparent byte relaying anyway.
> for HTTPS, they usually can only support transparent byte relaying anyway.
On my LAN I run Squid on a Raspberry Pi, and have my personal laptop configured to use that as an HTTP and HTTPS proxy.
All TLS HTTP connections going through the Squid proxy are intercepted.
This only requires that my laptop trusts a self-signed CA certificate that Squid uses to generate per-site certificates on the fly.
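For reference, a minimal sketch of what that interception setup looks like in squid.conf (Squid 4+; the file paths and helper location are assumptions and vary by distro, and ca.pem must contain the self-signed CA cert plus its key):

    # Accept proxy connections and bump (intercept) TLS using our own CA
    http_port 3128 ssl-bump cert=/etc/squid/ca.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

    # Helper that mints per-site certificates signed by the CA above
    sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/spool/squid/ssl_db -M 4MB

    # Peek at the TLS ClientHello, then bump everything
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump bump all

The laptop then imports ca.pem (without the key) into its trust store, which is the "trusts a self-signed certificate" step above.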
Someone could easily run the same kind of thing on the internet, providing free proxy service and telling their users to trust a certificate signed by them, without properly explaining the consequences of that. And a lot of novice users would likely use that proxy service. Gleefully unaware that even the “encrypted” traffic is completely visible to the proxy.
In fact, I would be extremely surprised if there aren't a gazillion services out there doing exactly that.
But in many jurisdictions running a service like that would likely be cybercrime. And even if it wasn’t illegal, it’s still not nice. So, you know, don’t go and actually create a service like that.
Not really. I do the same thing, but I do not use Squid. Learning how to operate a localhost proxy is not particularly difficult compared to, say, learning programming languages. The latter is a topic people on HN discuss ad nauseam. No one questions when someone lists the computer languages they know and claims they can learn a new language in X minutes or a weekend or whatever.
Just because someone does not know how to do something does not mean it is difficult. It just means they did not try to learn how to do it. This is a very common comment on HN. It's quite silly.
Learning how to set up a localhost proxy on a laptop is far easier than learning a programming language. But it is not something that many people on HN want to learn, cf., e.g., programming languages.
>Just because someone does not know how to do something does not mean it is difficult. It just means they did not try to learn how to do it. This is a very common comment on HN. It's quite silly.
Honestly, what's even more common and more silly are these kinds of comments:
"blah blah blah its easy, i did it blah blah i don't understand the problem"
Ever consider that other people are somehow different than you? Have different strengths, weaknesses and abilities? Have different needs from software? It's like, why do we even make software, you could just learn binary duh.
Every user is different. But software developers commenting on HN like to assume one size fits all. Perhaps this makes sense if they are getting paid from advertising. If every user is doing something different instead of all looking at the same website, using the same app, watching the same video, repeating the same meme, using the same few browsers on the same few operating systems, etc., then advertisers are less interested in throwing money away on "advertising services" from so-called "tech" companies.
I'm not talking about how difficult it is to set up a proxy. I meant that getting someone else's computer to accept a rogue root CA is a big deal, so saying an attack "only" needs that to happen is misleading.
> getting someone else's computer to accept a rogue root CA is a big deal
IMO not necessarily. See this part of what I said:
> telling their users to trust a certificate signed by them, without properly explaining the consequences of that. And a lot of novice users would likely use that proxy service. Gleefully unaware that even the “encrypted” traffic is completely visible to the proxy.
But in addition to that, note that where I was using the word “only” was specifically in the part of my comment where I was talking about how I set up Squid for myself using my own Raspberry Pi and my own personal computer.
Yeah, I've thought about having a CA for my home LAN services and having my phone and laptop trust that CA, but I'm terrified of the possibility that my CA could be compromised, and then someone could intercept my traffic to my bank or whatever.
So I just put up with clicking through the TLS cert errors every now and then.
That's a neat idea! I looked into name constraints many years ago, and at the time, no common browser or TLS library supported it; glad to see that that has changed.
With ubiquitous support, I hope that one day we'll be able to routinely get "subdomain CA certificates" issued by something like Let's Encrypt, just like it's already possible to get wildcard certificates.
Parent commenter is talking about having a sub CA that is restricted to issuing certs for a specific domain.
For example let’s say that I am hosting a website at somewhere.example.com
Today I would be able to get a Let's Encrypt TLS cert for somewhere.example.com, and if I control the DNS for somewhere.example.com, I can get a wildcard cert for *.somewhere.example.com
But from what parent is saying, with name constraints it would be possible for Let's Encrypt to give me a cert that would allow me to act as a CA for anything under somewhere.example.com
Meaning that I could for example issue a TLS cert for treehouse.internal.somewhere.example.com using the restricted CA certificate that was given to me.
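A minimal sketch of how such a constrained sub-CA can be expressed with OpenSSL (file names are placeholders, and somewhere.example.com is the example domain from above):

    # ext.cnf -- X.509v3 extensions for a name-constrained sub-CA
    [ v3_subca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage = critical, keyCertSign, cRLSign
    # Per RFC 5280, a DNS constraint covers the name and all its subdomains
    nameConstraints = critical, permitted;DNS:somewhere.example.com

    # Sign the sub-CA's CSR with the root, applying those extensions
    openssl x509 -req -in subca.csr -CA root.crt -CAkey root.key \
      -CAcreateserial -days 365 -extfile ext.cnf -extensions v3_subca \
      -out subca.crt

Certs that subca.crt issues for treehouse.internal.somewhere.example.com would then validate, while anything it issued outside somewhere.example.com would be rejected by clients that enforce the constraint.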
A DIY CA is pretty easy to airgap: keep it on hardware that isn't your daily driver and only has a minimal/secure OS with no network connectivity. Anything you have lying around can do it: like an outdated laptop or SBC.
Even just using a VM for the CA would likely be sufficient. Only fire it up for signing, then keep its storage encrypted. I do this on my Proxmox server.
This, to me, is worth it for local stuff. Trusted certs from your own CA are better than blindly trusting an invalid cert, and some browsers require trusted certs to autofill passwords.
I used to do the same, but these days, getting TLS certificates for local services is actually not that hard anymore.
If you have local DNS, you can e.g. request a wildcard subdomain Let's Encrypt certificate and then distribute the corresponding key and certificate to your LAN hosts.
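A sketch of that request with certbot, assuming you control public DNS for a placeholder home.example.com (the manual DNS-01 challenge asks you to create a TXT record; certbot's DNS plugins can automate that step):

    # Request a wildcard cert for the LAN subdomain via DNS-01
    certbot certonly --manual --preferred-challenges dns \
      -d '*.home.example.com'

The resulting key and fullchain then get copied to each LAN host (or terminated at a single reverse proxy), and the same has to be repeated at renewal time.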
>Someone could easily run the same kind of thing on the internet, providing free proxy service and telling their users to trust a certificate signed by them, without properly explaining the consequences of that.
Somebody already did do this, except as a paid service, and had their special 'client' simulate user clicks to install the self-signed root CA cert in your OS' cert store for you.
Interesting, it would have to be a pretty invasive client to do that. Usually installing a cert is accompanied by a lot of very loud warnings on modern OSes. So the end user would first have to give this software permission to click around on their desktop for them without fully understanding the implications, which does seem plausible.
Adding trusted certificates in Firefox directly, instead of at the OS level, is very straightforward. It takes only a few clicks and does not shout too much.
I prefer using Firefox on my laptop so I didn’t check to see what the process is like for Chrome-based browsers to add trusted certificates (or if Chrome-based browsers only use OS-level certs).
But at least with Firefox, the user doesn’t have to go fiddling with OS level stuff.
No. You put it public, get a public domain and a valid cert from the list of CAs that Google and Mozilla treat as trustworthy. Look at them; there are more problematic ones than unproblematic ones.
Web proxies completely bypass any protection offered by HTTPS, as they act as a true man-in-the-middle and place requests on behalf of the user. Unlike traditional proxies, web proxies are entirely web-based and use a web interface, so literally all the data flows through the server-side code of the web proxy.
Not really. In my view, VPNs (at least the type discussed here) and proxies are complementary:
VPNs are good at encrypting/redirecting all of your device's traffic, since they're per-computer by default. They're accordingly good at preventing metadata leaks (e.g. visited sites or used apps) on untrusted networks.
Proxies are opt-in, but can accordingly be much more fine-grained. For example, Firefox supports per-domain (via various extensions) or per-tab (via the built-in "containers" feature) proxies – VPNs usually can't do that.
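For the per-domain case, a proxy auto-config (PAC) file is another lightweight option that both Firefox and Chrome support; the domain and proxy address below are placeholders:

    // proxy.pac -- send one domain through a SOCKS proxy, rest direct
    function FindProxyForURL(url, host) {
      if (dnsDomainIs(host, ".example.com")) {
        return "SOCKS5 127.0.0.1:1080";
      }
      return "DIRECT";
    }

In Firefox this goes under "Automatic proxy configuration URL"; extensions like FoxyProxy offer the same per-domain routing with a UI.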
I am not 100% sure, but Firefox VPN is an actual VPN based on Mullvad. The main product page[1] says it is built with WireGuard, which is VPN software.
VPNs can, if they can be routed into via SOCKS or HTTP CONNECT gateways, for example. Generally, VPNs (L2/L3) can stoop to the level of proxies (L4), but not vice versa (at least not as cleanly).
Sure, you can bridge in either direction (using e.g. this [1] excellent WireGuard-to-SOCKS adapter), but in my view, if you have bytestream semantics, you're often better off using a bytestream-oriented proxying protocol (like SOCKS, SSH or HTTP), and vice versa.
These bridges/adapters do have their applications though – I have a home router that supports WireGuard natively, but not any of the higher-level protocols; this lets me use my per-tab approach with it.
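If I'm remembering the wireproxy README correctly, its config is close to wg-quick's format plus a service section, along these lines (the keys, addresses, and endpoint are placeholders, and the exact option names are from memory, so treat this as a sketch):

    [Interface]
    Address = 10.200.200.2/32
    PrivateKey = <client private key>

    [Peer]
    PublicKey = <server public key>
    Endpoint = router.example.com:51820

    # Expose the tunnel as a local SOCKS5 proxy for the browser
    [Socks5]
    BindAddress = 127.0.0.1:1080

The browser (or a single container/tab) then points at 127.0.0.1:1080, and only that traffic goes through the WireGuard tunnel.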
I don't really get the value proposition of wireproxy. Especially since it seems not to be complete yet.
It is trivial to run a SOCKS proxy on one of the peers and have your browser point to that. Both Chrome and Firefox can do this on demand and for the sites you select.
I have a Docker-based proxy running on a VM. (I've tried a bunch of them; they all work fine, and none of them are hugely better at the bandwidth levels available to me, around 50-70 Mbps.) The proxy is only listening on the WireGuard IP. I have my clients connect to that WireGuard peer and use the WireGuard IP as the proxy. You can't install a proxy on the remote side? It should be possible, seeing that you have to install something anyway. I am not sure about not needing root, but it should not be a requirement for a proxy server, since all it does is make HTTP requests on your behalf.
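As a concrete sketch of that setup, assuming microsocks as the proxy and 10.0.0.1 as the WireGuard IP (both are examples, not details from the comment above):

    # SOCKS5 proxy listening only on the WireGuard interface IP
    microsocks -i 10.0.0.1 -p 1080

Clients on the WireGuard network then configure 10.0.0.1:1080 as their SOCKS proxy; nothing is exposed on the public interface, and no root is needed for an unprivileged port like 1080.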
Yes, "work" VPNs, site-to-site and many other topologies don't change the default route, but "privacy" VPNs like Mullvad usually do – there is no group of hosts to route traffic for other than simply "the entire internet".
That said, I'm aware of at least one that tries to support an "exempt/excluded hosts" feature, but it does this via some hack using its local DNS resolver and modifying the routing table on the fly, which does not work reliably.
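For comparison, the manual version of that "excluded host" trick on Linux is a more-specific route that wins over the VPN's default route (the addresses here are placeholders):

    # Send one destination via the physical gateway instead of the tunnel
    ip route add 203.0.113.5/32 via 192.168.1.1 dev eth0

The unreliable part is keeping such routes in sync when the excluded hostname resolves to changing IPs, which is why that client resorts to hooking its local DNS resolver.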
Interesting! Is that actually the letter of the specification, or a common/industry-standard interpretation? I hate VPN setups like that; it often makes videoconferencing, browsing of non-corp sites etc. unnecessarily slow.
It's the letter of the specification, unfortunately:
> 3.13.7 Prevent remote devices from simultaneously establishing non-remote connections with organizational systems and communicating via some other connection to resources in external networks (i.e., split tunneling).
> DISCUSSION
> Split tunneling might be desirable by remote users to communicate with local system resources such as printers or file servers. However, split tunneling allows unauthorized external connections, making the system more vulnerable to attack and to exfiltration of organizational information. This requirement is implemented in remote devices (e.g., notebook computers, smart phones, and tablets) through configuration settings to disable split tunneling in those devices, and by preventing configuration settings from being readily configurable by users. This requirement is implemented in the system by the detection of split tunneling (or of configuration settings that allow split tunneling) in the remote device, and by prohibiting the connection if the remote device is using split tunneling.
And yep, it does indeed cause all of the problems you describe.
no, that's why you tunnel through seven proxies, each being used with different sets of credentials/encryption keys, all disposable. The last tunnel is not the main data channel, but the channel you use to coordinate command and control, and then you use a botnet to distribute pieces of your real communications.
Web proxies aren't traditional proxies. They have a web interface and issue requests on behalf of the user server-side, so all of the user's data flows through the proxy's web interface and server-side code in plain text (though protected by the HTTPS of the web proxy itself). This is fine if you 100% trust the web proxy, but a malicious web proxy operator could easily look at all your data.
I used to pay a small fee for a shell account from some UK provider so I could set up a SOCKS proxy over an SSH tunnel. I suppose they could have captured my egress traffic, but I trusted them not to. I was just using it to watch BBC iPlayer/Channel 4 from the US anyways. :)
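That setup is still a one-liner with OpenSSH's dynamic port forwarding (the hostname is a placeholder for the shell provider):

    # Local SOCKS5 proxy, forwarding through the remote shell account
    ssh -D 1080 -N user@shell.example.co.uk

The browser's SOCKS proxy is then set to 127.0.0.1:1080, and all proxied traffic egresses from the UK host.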
The first VPNs I encountered were for bridging branch offices onto the corporate network.
It was only later, when they made 'consumer' VPNs, that they became point-to-multipoint affairs for bridging a single computer onto the network. I'm not really sure how that confusion happened. In that era they were glorified SSH tunnels.
Well, they generally call the first type site-to-site VPN tunnels and the second type client tunnels. Lots of different marketing from various companies makes it confusing, since it's basically all the same OSS under the hood.
What is with this tendency to want to gatekeep the term "VPN" away from consumer-oriented providers? The general term "VPN" means exactly the same thing now as it did 20 years ago.
Virtual means it doesn't correspond to a physical network interface. Private means it involves encryption, as opposed to a basic tunnel like ipip or 6in4. And they've always been network interfaces showing up on some node, regardless of whether that node might have been a vendor's proprietary black box.
Decades ago there were fewer uses/topologies, dedicated "routers" were more important, and people naively trusted infrastructure. Those are the differences that have evolved with time. Quick searches say OpenVPN was released in 2001, and tinc in 1998.
> Private means it involves encryption, as opposed to a basic tunnel like ipip or 6in4.
The common-sense meaning of "private network" was, and is, a network that is private. I had one with a bunch of my university friends - we ran our own network services that we wouldn't trust to the wider world, like we had back when we lived together and really did have our own private network.
A point-to-point line to the provider's router that then bridges you onto the public internet is a "private network" only in the most degenerate sense.
> A point-to-point line to the provider's router that then bridges you onto the public internet is a "private network" only in the most degenerate sense.
You can make an analogous argument about the traditional corporate site to site VPN, which is a point to point link between routers that bridges two non-virtual networks. By your standard, calling that a virtual network is only true in the degenerate sense.
I see your point about the possible meaning of "private", but I don't think that quibbling over the semantics is useful for much besides gatekeeping. There were plenty of corporate VPN links piping Internet-reachable IP addresses, just as there were plenty of VPN links with broken or nonexistent crypto.
> You can make an analogous argument about the traditional corporate site to site VPN, which is a point to point link between routers that bridges two non-virtual networks. By your standard, calling that a virtual network is only true in the degenerate sense.
Disagree. "The network", in the sense that my PC, and Bob's PC in the next town, and the server in our colo space, are all on "the network", is virtual, in a pretty essential sense. Even if 68 of the links in the network are physical wires and only 2 of them are virtual, their existence changes the character of the whole. In the same way that we have an "international network", that would be important to think of and treat as international, even though it only has one cross-border cable.
What is the point of just quibbling over definitions? If you were using this framing in support of a larger idea, it would be plausible to entertain. But without that I don't really see much point, because it's just as easy to declare things the opposite of your assertion - e.g. the term "VPN" doesn't apply to an entire corporate network (as it would per your extension argument), rather just the virtual link part of it. Of course asserting this is similarly pointless without some larger point.