I personally don't use it for security reasons, but to listen to last.fm easily through a VPS in a country where that's free (just one example).
Security is never just a piece of software you install; it also depends on you knowing what you're doing.
Sure there are other uses. I'm just making sure people know, because it's being advertised as a VPN alternative, and this problem wasn't obvious to me when I first looked at it.
Would it be too hard to add non-TCP traffic tunneling? This already supports DNS traffic through the tunnel, but I wonder if it's feasible to tunnel all kinds of traffic (mostly UDP and ICMP) through it?
The SSH connection is inherently a reliable stream, so you need to be careful - it should work if you drop UDP packets when the ssh stream's send buffer is full, but there might be severe performance traps down that route. Maybe an expert can elaborate. The other issue is that you'd need to do manual NAT-like connection tracking to match up sources and destinations. I could imagine many UDP-based protocols not taking kindly to that sort of treatment.
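To make the connection-tracking part concrete, here's a toy sketch (my own illustration, not anything from sshuttle) of the NAT-style table the far end would need: each local source address gets its own outgoing UDP socket, so replies can be matched back to the right client.

    # Hypothetical sketch of NAT-style tracking for tunnelled UDP (not sshuttle code).
    import socket, time

    class UdpNatTable:
        def __init__(self, timeout=60):
            self.entries = {}   # (src_ip, src_port) -> [socket, last_used]
            self.timeout = timeout

        def socket_for(self, src):
            self.expire()
            if src not in self.entries:
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                self.entries[src] = [s, time.time()]
            entry = self.entries[src]
            entry[1] = time.time()
            return entry[0]

        def expire(self):
            now = time.time()
            for src, (s, last) in list(self.entries.items()):
                if now - last > self.timeout:
                    s.close()
                    del self.entries[src]

    # usage: table.socket_for(('10.0.0.5', 40000)).sendto(payload, dst)

UDP has no FIN, so timeout-based expiry is about the best you can do, and that's exactly the sort of treatment some UDP-based protocols won't take kindly to.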
By the way, (open-)ssh itself supports a tun/tap VPN mode (-w I believe) that creates actual network interfaces on the two endpoints, and thus can transport any IP traffic. It needs to be explicitly enabled on the server, and needs kernel tun/tap support, which is usually missing on VPSes that don't let you run your own kernel (modules).
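For reference, the recipe looks roughly like this (run as root on both ends; addresses and tun numbers are just placeholders, and sshd_config needs PermitTunnel yes):

    ssh -f -w 0:0 root@server 'ip addr add 10.9.9.2/30 dev tun0 && ip link set tun0 up && sleep 999999'
    ip addr add 10.9.9.1/30 dev tun0
    ip link set tun0 up
    ping 10.9.9.2   # any IP traffic you route via tun0 now goes over ssh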
+1 for a reference to slirp, which let you turn a dial-in connection to a Unix terminal into a real internet connection back in the days when your University would give you a terminal connection but not a PPP connection.
I tried to set up an OpenVPN server, but after chasing a rabbit hole of instructions down the craziest URLs, full of obsolete or missing information, I finally gave up.
In a normal tunnel setup, one tunnels at the IP layer and dumps all IP packets into the tunnel. At the far end of the tunnel, packets are sent onwards based on the far end's routing table. Things like DNS "just work" because everything happens at a layer below TCP and UDP. In this system, he's making it work for each non-TCP layer-4 protocol separately, leading to weirdness like rewriting /etc/hosts.
For TCP, he's NATing all traffic locally to a local server, which then multiplexes all incoming traffic into the ssh connection. The remote side then demultiplexes the data. I don't fully understand how this avoids TCP-over-TCP. Maybe I'm dumb. [edit: yeah, I'm dumb. The TCP connection is terminated at the local server, the contents are pumped over the ssh connection, and the remote side opens a new TCP connection.]
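If it helps, here's a rough sketch (my own illustration, not sshuttle's actual code) of what the local-server half of such a scheme looks like on Linux: accept a connection that the firewall redirected to you, ask the kernel where it was originally headed, and hand the byte stream to the multiplexer.

    import socket, struct

    SO_ORIGINAL_DST = 80  # netfilter getsockopt option for REDIRECTed connections

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(('127.0.0.1', 12300))   # port is arbitrary
    listener.listen(16)

    while True:
        conn, _ = listener.accept()
        # recover the original destination from the kernel's NAT state
        dst = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
        port, = struct.unpack('!H', dst[2:4])
        ip = socket.inet_ntoa(dst[4:8])
        # a real implementation would now open a mux channel tagged (ip, port)
        # and pump conn's bytes over the single ssh stream
        print('intercepted connection to %s:%d' % (ip, port))
        conn.close()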
I wrote this mostly because I read the readme and went "but how does it work?".
sshuttle does not (currently) attempt to do anything other than TCP and DNS. UDP ought to be possible (except that UDP-over-TCP could have problems similar to TCP-over-TCP, i.e. packets are never lost, and that could confuse a hypothetical UDP-based congestion control algorithm). ICMP is probably not possible unless we run the server side as root.
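The root requirement for ICMP is just the usual raw-socket restriction; for example, in Python:

    import socket
    # needs root / CAP_NET_RAW; as an ordinary user this raises a permission
    # error, which is why an unprivileged server side can't forward ICMP
    socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)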
The "--auto-hosts" option you linked to is not the same as the "--dns" option. auto-hosts doesn't capture DNS at all; instead it just adds to your local /etc/hosts. I use this feature much more than --dns, actually, because unlike --dns, it works great if you have multiple tunnels to different offices open at once. (You get all the hostnames from all the offices.)
sshuttle was originally designed to VPN into an office and forward a couple of subnets over. In that case, you often don't want to use the remote DNS server, you just want to know the remote hostnames, because most of your DNS lookups have nothing to do with the remote server. Hence --auto-nets and --auto-hosts.
Nowadays it seems like sshuttle is mostly being used to counter things like Firesheep, which means you want to forward all your traffic to a remote server. In that case --dns - which forwards all your port 53 traffic over the VPN - is preferable.
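To illustrate the two modes (server names and subnets here are just placeholders):

    # per-office tunnels; each adds its hostnames to your local /etc/hosts
    ./sshuttle -r me@office1 10.1.0.0/16 --auto-hosts
    ./sshuttle -r me@office2 10.2.0.0/16 --auto-hosts

    # "forward everything" mode, including all port 53 (DNS) traffic
    ./sshuttle --dns -r me@server 0.0.0.0/0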
We could actually do all the other UDP ports the same way we do --dns. We just don't, because I haven't needed it and nobody else has submitted a patch.
Very much different. He explains it in the article quite clearly. No need to manually configure clients to use the proxy, as it uses the client's firewall to forward connections.
"sshuttle assembles the TCP stream locally, multiplexes it statefully over an ssh session, and disassembles it back into packets at the other end. So it never ends up doing TCP-over-TCP. It's just data-over-TCP, which is safe."
I haven't bothered to look at how this is actually implemented, so I can't comment on how it actually works.
As far as I know, it sets up firewall rules (ipfw/iptables) to redirect certain outgoing TCP connections to a local socket, where they are picked up by the sshuttle service and multiplexed over the ssh connection. At the other end, the stream is demultiplexed into connections again. Definitely no individual packets.
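Something along these lines on the iptables side (subnet and port are just placeholders):

    iptables -t nat -A OUTPUT -p tcp -d 10.0.0.0/8 -j REDIRECT --to-ports 12300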
I've heard the claim that TCP over TCP doesn't work well countless times. I've been using OpenVPN in TCP mode for at least 5 years on a daily basis and never noticed a problem. I've even done SIP over a TCP OpenVPN configuration without a noticeable problem.
It makes sense to me that it should perform badly, and it probably does for use cases with a lot of traffic, but for an average user on a laptop, a TCP-based VPN is fine.
In my experience it depends heavily on what you are doing. When I'm just sending files from one place to another through my SSH VPN, it performs reasonably well. When you actually try to do something interactive over the network, like a VNC or X connection, or even just get intensive enough with a remote vi session, performance can and does randomly tank. sshuttle works far better for that use case. (I stopped using it because it didn't do DNS and I didn't have the time to add that, but the commit log implies that has been fixed.)
What happens is you essentially get "infinite bufferbloat." When there's no packet loss, you end up absolutely filling the transmit buffers at the entry points to the tunnels. The result, as other people have mentioned, is extremely poor interactive performance when you're simultaneously transferring large files.
TCP-over-TCP isn't "broken" in the sense that the sessions will randomly drop or your kernel will crash or anything. It's just that doing it correctly is much better.
Mind you, given the prevalence of bufferbloat (mostly caused by misdesigned routers/DSL/cablemodems) nowadays, interactive performance already largely sucks when you're transferring large files. So you might not even notice a difference.
sshuttle assembles the TCP stream locally, multiplexes it statefully over an ssh session, and disassembles it back into packets at the other end. So it never ends up doing TCP-over-TCP. It's just data-over-TCP, which is safe.
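If you want a mental model of the multiplexing, think of something like this toy framing (just an illustration, not sshuttle's actual wire format): each chunk of an intercepted connection is tagged with a channel id and length, so many connections share the one ssh stream and no TCP headers ever get nested inside TCP.

    import struct

    def frame(channel_id, payload):
        # 2-byte channel id + 2-byte length, then the raw bytes
        return struct.pack('!HH', channel_id, len(payload)) + payload

    def unframe(buf):
        channel_id, length = struct.unpack('!HH', buf[:4])
        return channel_id, buf[4:4 + length], buf[4 + length:]

    chan, data, rest = unframe(frame(3, b'GET / HTTP/1.0\r\n\r\n'))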
Explanation: as usual, my company works on a shared LAN which goes through a single internet connection.
To test some applications, we have to be able to receive postbacks (on our developer machines) from 3rd-party services.
The best way we found was to have an OpenVPN server running somewhere. Each developer connects to the VPN server and receives a private IP-address. All postbacks go to the VPN server and are then routed through nginx to the correct developer machine (on the private IP address).
A VPN is a pain to set up and configure - can something like this be used instead? The question really is: how does nginx forward requests to the correct developer machine?
It's a generic "expose my local HTTP server to the Internet" tool, designed to be really convenient and easy to set up (assuming you already have a local HTTP server). If I understand you correctly, it may be exactly what you are looking for.
It's FOSS so if you don't want to use the service (my startup!) you can roll your own. :-) But if you're looking for convenience, the service is probably hard to beat. Come chat on #pagekite on Freenode if you've got any questions!
Cool! We're actually doing a pilot program right now to try and better understand the needs of web developers who use PageKite, so be in touch if you'd like to be part of that.
You can have Python running on the DD-WRT box of course, which will probably drastically slow down your router. You can also generate a public key for the router and append it to a remote host's .ssh/authorized_keys file, but for reason number 1 above, it's probably not a good idea. Now if you have a PC box running Vyatta or pfSense, then you might be able to pull this off with minimum impact.
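For the key part, something like the following works if the router has an OpenSSH-style client (dropbear-based firmwares use dropbearkey instead; the paths are just placeholders):

    ssh-keygen -t rsa -N '' -f /jffs/.ssh/id_rsa
    cat /jffs/.ssh/id_rsa.pub | ssh user@remotehost 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'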
A few years back, I wrote a multiplexer / tunneler similar to Sshuttle in Python for a "Go To My PC" style web service. We wanted to tunnel over ssh, but had problems automating the client key generation and first-time connection. So we went with SSL. That and the cross platform requirement (most clients were running Windows) really took the fun out of it.
Anyone know of a portable Chrome that I can SSH into home with? Preferably something I can leave on a shared computer, while only I have access to the stunnel. Also, it may need to be able to use port 80; I don't know the filter inside out.
Would this circumvent the checks Hulu does, for example, by forcing Flash to always connect directly?
I.e. using a SOCKS proxy will not work with Hulu, but using a full VPN will, so how does sshuttle rate?
I'm a little confused. I'm outside the USA and I want to browse some content that is restricted for users located outside of the U.S. Can I do this using sshuttle and a VPS?
Yes, but you need a server in the US, so you can ssh to it.
Just so you know, for anything that works over a SOCKS proxy you can already do that easily with ssh (using the -D switch; check the manpage). You then need to configure your client app to use it (e.g. if doing ssh -D locally, in Google Chrome go to Tools -> Preferences -> Under the Hood -> Proxy settings -> Manual, then fill in localhost and the port number).
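For example (the port number is arbitrary):

    ssh -D 1080 -N user@sshserver
    # then point the browser's SOCKS proxy at localhost:1080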
Please actually read the linked articles before replying, and signal some recognition of the fact you have done so in your reply. If you wish to argue that the arguments put forth in the sshuttle README are somehow invalid and that what you linked is still better, please do so, but at the moment it looks like you haven't even read them.
git clone git://github.com/apenwarr/sshuttle on your client machine. You'll need root or sudo access, and python needs to be installed.
./sshuttle -r username@sshserver 0.0.0.0/0 -vv
(You may be prompted for one or more passwords; first, the local password to become root using either sudo or su, and then the remote ssh password. Or you might have sudo and ssh set up to not require passwords, in which case you won't be prompted at all.)
That's it! Now your local machine can access the remote network as if you were right there. And if your "client" machine is a router, everyone on your local network can make connections to your remote network.
You don't need to install sshuttle on the remote server; the remote server just needs to have python available. sshuttle will automatically upload and run its source code to the remote python interpreter.
This creates a transparent proxy server on your local machine for all IP addresses that match 0.0.0.0/0. (You can use more specific IP addresses if you want; use any number of IP addresses or subnets to change which addresses get proxied. Using 0.0.0.0/0 proxies everything, which is interesting if you don't trust the people on your local network.)
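For example, to proxy only a couple of remote subnets instead of everything (these subnets are just placeholders for whatever your remote network uses):

    ./sshuttle -r username@sshserver 10.1.0.0/16 192.168.42.0/24 -vv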
Any TCP session you initiate to one of the proxied IP addresses will be captured by sshuttle and sent over an ssh session to the remote copy of sshuttle, which will then regenerate the connection on that end, and funnel the data back and forth through ssh.
Fun, right? A poor man's instant VPN, and you don't even have to have admin access on the server.
I tried sshuttle a while ago and abandoned it because of this. The only thing worse than no security is a false sense of security.