With a gateway entry in ~/.ssh/config, ssh will automatically connect through the gateway to the (normally inaccessible) internal node, and ForwardAgent will pass the credentials through. (If you copy this blindly, note that it requires netcat on the gateway.) This configuration lets you pretend (to tools like scp, rsync-over-ssh, etc.) that you have direct access to the machine in question, even when everything actually goes through a gateway machine:
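For concreteness, a sketch of the sort of ~/.ssh/config entry being described; the host names are made up, and newer OpenSSH can use "ProxyCommand ssh -W %h:%p gateway.example.com" instead of netcat:
Host internal-node
HostName internal.example.com
User myuser
ForwardAgent yes
ProxyCommand ssh gateway.example.com nc %h %p
After that, a plain "ssh internal-node" (and scp/rsync to internal-node) just works.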
If your intent is to hide what you're doing from your LAN/ISP, there is a snag with Firefox. It will leak DNS requests locally unless you go to about:config and set "network.proxy.socks_remote_dns" to "true"
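If you script your browser launches, the same preference can go in the profile's user.js; this is just the about:config setting in file form:
user_pref("network.proxy.socks_remote_dns", true);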
Yes, it's a useful trick I often use. However, does anyone know if Chromium or Opera can do the same DNS request tunnelling? (Firefox+Linux is sluggish for me.)
I'm pretty sure Chrome, at least, doesn't support SOCKS yet at all.
I was trying to make a shell script just the other day that would set up a SSH SOCKS proxy to a remote box and launch a browser using it, but couldn't do it from the command line in either Firefox or Chrome (didn't try a custom profile). Furthermore, I couldn't get it working in Chrome at all (shame; Firefox is more sluggish, and Opera won't even start on my box).
One problem with Chrome is that its proxy config interface really is just interfacing with the GNOME desktop-wide proxy settings. So it's impossible to have Chrome use multiple proxies or to only have one instance of Chrome use a proxy, but another instance not -- even if both instances have completely separate user data directories -- and all proxy changes are instantaneous across all Chrome instances. (At least for Chromium 5.0.391.0 on Linux)
Chrome on OS X can use SOCKS, but not explicitly for Chrome, since the operating system manages the connections under System Preferences > Network > Advanced > Proxies.
Even better: Use SSLH[1], so you can still have an HTTPS server, but also connect to ssh on port 443. Set it up myself and my boss still doesn't know.. :)
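For reference, an sslh invocation tends to look roughly like this; the addresses are placeholders, the real HTTPS server is assumed to have been moved to a local port, and you should double-check the flags against your version's man page:
sslh -p 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:4443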
I do it every day; it works out easier for the IT dept than configuring the network for people like me... but if you are not on good terms with the IT dept, it's a good way to lose your job, so tread carefully.
As others have mentioned, it allows you to use a remote box as a SOCKS proxy, so your browsing traffic originates from there and is encrypted on the local network.
This is useful when the local network is untrusted, or to bypass things like url blocklists or IP filtering.
The arguments are as follows:
ssh -C2qTnN -D 8080 your-user@example.com
C2: request compression, at level 2 (1 is least)
q: quiet mode (suppress most warnings and diagnostic messages)
T: Disable pseudo-tty allocation. I don't know why you would want this. I suspect that since you are only using it for a proxy you don't really need a tty, but it seems a little unnecessary to me.
n: redirect stdin from /dev/null. Again, not sure why this is needed, but I suspect it is related to the "T" option.
N: Do not execute a remote command. You are only using port forwarding so no commands are needed.
-D 8080: listen on local port 8080 as a SOCKS proxy and dynamically forward connections through the tunnel
your-user@example.com: Username/host of remote machine.
This is a pretty optimized example. The simplest working version is just
ssh -D 8080 your-user@example.com
Personally, I'd use the -C2 argument as well, but leave the rest out.
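A quick way to sanity-check the tunnel once it's up; curl's --socks5-hostname makes DNS resolve on the far end too, and the URL is just any what's-my-IP service:
curl --socks5-hostname localhost:8080 http://ifconfig.me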
According to my manpage, C is for compression and 2 is for SSH2. You'd need to use -o to set the compression level, and then you'd probably also want to make sure compression actually helps (manpage warns against it).
Similarly, -n is described as necessary for backgrounding the process, if you want to. I can't find a reason to use -T, unless you intend to send binary data over the pipe.
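Spelled out with -o, that would be something along these lines (and per the man page, CompressionLevel only applies to protocol 1, so with SSH2 it is largely moot):
ssh -C -o CompressionLevel=6 -N -D 8080 your-user@example.com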
Flash works, but it uses the default system connection and not your browser's proxy connection. So on sites that have IP-based restrictions, Flash content will still see your original IP.
And how do you know about this?
I use the very same trick with YouTube and many other Flash-based sites.
And Flash ALWAYS uses the browser's settings (at least in Opera and Firefox), both on Windows and Ubuntu.
Basically, you are sending and receiving all of your web traffic through an encrypted connection to a remote computer.
A typical use for this is when you are connected to a foreign network like a hotel. If you don't route your http traffic over ssh, there is a risk of having your traffic sniffed and/or recorded.
One of the biggest 'ah-ha' moments I had with SSH was that I could create my own hosts with certain properties. For example, if I wanted a backup server with a special user and key I could add it to my ~/.ssh/config file:
Host backup-server
HostName backup.example.com
User backup
IdentityFile ~/.ssh/backup_dsa
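After that, every tool that goes through ssh picks up the alias, e.g. (the paths here are placeholders):
ssh backup-server
rsync -a ~/important/ backup-server:backups/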
This is especially useful when dealing with directories with too many files in them - say around a million files. Regular globbing won't work, and most regular tools won't work - but tar works great.
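For anyone who hasn't seen it, the pipe in question looks roughly like this (host and paths are placeholders):
tar czf - hugedir | ssh user@remotehost 'tar xzf - -C /destination'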
Also - another trick is to give it -c blowfish to use a lighter/faster cipher for the transfer to save on CPU time.
Ssh already compresses, so you can drop the z. And generally cpio does a better job preserving the file system structure (hard links, time stamps). And it tends to be a bit more cross-platform, if you don't have GNU tar on both ends.
I usually do use scp, but it becomes problematic when there are symlinks hanging around. scp just follows them and ends up copying the same file many times.
This is especially bad if you have a directory structure that references a higher-level directory.
In my experience, using rsync to do just a raw copy is not faster than the ssh+tar approach. rsync wants to poke around and optimize, and that's incredibly awesome, except when there is absolutely no chance of optimizing at all, in which case it's just dead weight. I don't know of anything that can beat ssh+tar (possibly with the encryption tuned cheaper, as mentioned above); it especially blows raw scp out of the water for lots of small files.
Check 'man ssh' for escape characters, including for how to terminate an SSH session when the remote is not responding. Check 'man sshd' for the AUTHORIZED_KEYS FILE FORMAT section. I call particular attention to the "command='command'" option, which allows you to set up an SSH key in such a way that it can only be used to run a particular command. Of course the key is only as secure as the command, but that's a great start. I use it for when I want to cron a job where one server has to talk to another to do something as some privileged user, and of course can't enter a password then, but I don't want the key to grant full login privs as that user.
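A hypothetical authorized_keys entry in that spirit, with the key material elided and the script name made up:
command="/usr/local/bin/nightly-backup.sh",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... backup-only key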
This one is simple, but I like it. Use the -X option to enable X11 forwarding, so a remote application displays in your local X11 session. Works for Linux and OS X machines with X11 installed.
>ssh -X user@remoteserver.com
will connect you.
>gedit file.txt
will launch a remote instance (viewable locally) of gedit with the remote file.txt loaded and ready for editing. Especially good for those who don't like command line editors (note: gedit must be installed on the remote machine for the example to work.)
This is a neat trick, and I've used it a lot -- but worth pointing out that this should really only be done over a local network. X11 apps tend to be VERY chatty, and very slow when forwarded over the internet. I'd recommend VNC or FreeNX to do remote X11 apps over the internet.
My favorite trick with SSH is running Tramp mode in Emacs to edit files on remote servers, invaluable when accessing isolated servers in a data center via a jump-box server from my local machine. The fun part is the multi-host jump to edit files on a server that is a couple of hops away, like jumping through multiple data centers.
I usually bookmark the Tramp sessions for frequently visited servers to avoid retyping the host URL and login settings.
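For the curious, an ad-hoc multi-hop Tramp path looks something like this (host names are placeholders; the | syntax needs a reasonably recent Tramp):
C-x C-f /ssh:me@jumpbox|ssh:me@internal-server:/etc/hosts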
Got a computer behind a firewall whose configuration you don't have access to? It's pretty easy to get the computer behind the firewall to poke out to another server.
(step 1, from the computer you wish to access)
derwiki@firewalledcomputer:~$ ssh -R localhost:2002:localhost:22 mypublicserver.com
(step 2, from any computer that can access mypublicserver.com)
derwiki@mylaptopontheinternet:~$ ssh mypublicserver.com -p 2002
(authenticate)
derwiki@firewalledcomputer:~$
If you want to keep it running always, you may want to consider "autossh" (restarts ssh connections if they ever exit/disconnect)
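A sketch of step 1 using autossh instead; -M 0 turns off the monitor port, so the ServerAlive options handle dead-connection detection:
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R localhost:2002:localhost:22 mypublicserver.com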
It's important to encrypt your private key with a passphrase. Use ssh-agent to store the decrypted key in memory after you log in. On OS X 10.5 or greater, this is really easy: http://bit.ly/alDMhp. Make sure to add 'ForwardAgent Yes' to your ssh config, and then you'll never have to type your ssh passphrase again.
It is almost as bad to have your passphrase-protected key permanently stored in ssh-agent, because anyone with access to your machine can use the key without the passphrase. A better solution is to use ssh-agent with the -t option to establish a lifetime (after which you will need to re-enter the passphrase).
My setup is to keep ssh-agent running with a 2-hour lifespan, and connect to that automatically when I log in. Basically this:
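A minimal sketch of that kind of setup for a shell profile, assuming bash and a 7200-second (2-hour) lifetime:
# start an agent with a 2-hour default key lifetime if one isn't already running
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -t 7200)" > /dev/null
fi
# -t here also caps this particular key's lifetime in the agent
ssh-add -t 7200 ~/.ssh/id_rsa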
I'm just gonna say this: NEVER leave SSH keys without a passphrase and call that a secure way to get passwordless logins. It's insecure and your keys can be used against you. Giving your key a strong passphrase and using ssh-agent and ssh-add will give you passwordless logins and the benefit of security.
This could really use an explanation of what the arguments are doing. It's just a list of ingredients, not a recipe; very little can be learned from this list (though it may inspire learning).
By the same token, though, why document any code? It executes the same no matter what comments are around it, and all the underlying operations are well documented.
These aren't lists of ingredients, they are recipes. A typical problem with man pages is that they list the ingredients but don't give you any useful recipes. Some have an 'Examples' section but most don't. These are recipes: ingredients and parameters for them, and a short explanation of what you're going to get. Makes sense to me.