SSH Kung Fu (tjll.net)
487 points by stasel on April 28, 2014 | hide | past | favorite | 130 comments



A trick I learned recently: create ~/.ssh/config

File format: as many of the following blocks as you like

  Host $ALIAS <-- whatever you want here
  Hostname www.example.com
  User someuser
  Port 1234
You can now ssh to that server as that user by doing "ssh $ALIAS" on the command line, without needing to specify the port or user with the usual command line arguments, or necessarily spell out the entire host name.


What is more, you can specify an abstraction for the tedious double-ssh where you first connect to some internet-facing host in order to gain access to an internal machine:

    Host $ALIAS
        User $USER
        HostName $INTERNAL
        ProxyCommand ssh $USER2@$PUBLIC -W %h:%p
Now

    laptop> ssh jim@public.example.com
    public> ssh dev@myworkstation
becomes

    laptop> ssh work
(I just realized that this slightly confused article seems to accomplish the same by using a convoluted setup of port-forwardings and netcat.)


The ssh -W option -- which replaces netcat -- is relatively new. For example, Red Hat 5.x did not have it, nor did Ubuntu 10.04 LTS. Before OpenSSH 5.4, netcat was the way to do this sort of proxying.


I ran into an issue[1] with the combination of -W and ControlPersist when using OpenSSH versions < 6.0.

netcat worked fine.

  [1]: https://news.ycombinator.com/item?id=4678117


And if your router keeps dropping idle connections, add something like:

    ServerAliveInterval 240
    ServerAliveCountMax 5


Can help on mobile (3G/4G) connections too.


> (I just realized that this slightly confused article seems to accomplish the same by using a convoluted setup of port-forwardings and netcat.)

Yeah, the article first sets up

    Host bar
      ...
and then

    Host behind.bar
      ...
But it can also be done in just one step:

    host behindbar
      User         <user-behindbar>
      Hostname     behindbar.domain
      ProxyCommand ssh <user-bar>@bar.domain nc %h %p 2> /dev/null


I use autossh[1] to keep a tunnel open in the background to $PUBLIC, which lets me connect faster to $INTERNAL.
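
A sketch of the sort of invocation I mean (local port and keepalive values are assumptions, reusing the $-placeholders from the config above; -M 0 relies on ssh's own keepalives instead of autossh's monitor port):

    autossh -M 0 -f -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -L 2222:$INTERNAL:22 $USER2@$PUBLIC

After that, "ssh -p 2222 localhost" reaches $INTERNAL quickly.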

[1] http://www.harding.motd.ca/autossh/


A favorite .ssh/config feature of mine is pattern matching on hostnames with "?" and "*". So you can say something like:

    Host bos-??
    HostName %h.mydomain.com
    IdentityFile ~/.ssh/my-boston-key

    Host nyc-??
    HostName %h.mydomain2.com
    IdentityFile ~/.ssh/my-nyc-key
and log in with e.g. "ssh bos-14".


Yeah, I use a similar thing for ec2:

    Host *.amazonaws.com
      User ec2-user
      IdentityFile ...
And then it is just

    ssh ec2-X-X-X-X.compute-1.amazonaws.com


This seems to be relatively new. It doesn't work on a couple of boxes I tried.

Thanks though, I didn't know about the ?? syntax.


I found references to it going back to 2008, and the git repo that has my dotfiles says I've been using it (in Linux) for 3 or so years. Maybe it depends on the OS/distro.

The patterns are similar to shell globs: * matches zero or more characters, ? matches exactly one.


This is excellent advice. The best part is that you can use the same $ALIAS for tools built on SSH, including scp and rsync, like this:

    scp $ALIAS:/var/log/mylogs/logfile ~/backups/logs/
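
And likewise with rsync, which runs over ssh by default:

    rsync -avz $ALIAS:/var/log/mylogs/ ~/backups/logs/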


The aliases also work when mounting filesystems over the network in the File Browser.

Nautilus / Connect to Server / Server Address: work/home/user

(here 'work' is the alias for the work computer)


And... Ansible uses those aliases too, making it trivial to maintain *NIX servers that are behind firewalls and/or VPNs.


Well, that sure beats my .bashrc that's full of alias "servername"="ssh user@servername -p portnum"...


What's more, you can specify as many Host aliases (on one line) as you want (with wildcards):

    Host 192.168.* *.foo.*.com *.bar.net


I've been doing this for a while now, but my file is now huge and it's cumbersome to edit. Is there no utility to manage that file?


I've worked around this by creating a ~/.ssh/config.d directory and splitting my configuration out into multiple files (normally by project). I then use dotdee[1] to watch that directory and automatically rebuild ~/.ssh/config anytime there is a change.

[1] https://launchpad.net/dotdee


I don't know of any, but one thing you can do is split up your config into multiple files and then use `cat` to combine them after making a change. There's also https://github.com/markhellewell/sshconfigfs


Here's what I put in my .bashrc:

  alias compile-ssh-config='echo -n > ~/.ssh/config && cat ~/.ssh/*.config > ~/.ssh/config'
  alias ssh='compile-ssh-config && ssh'
Compiles all your ~/.ssh/*.config files into a single ssh config file. It's simple and stupid and seems to do the trick.


I think storm[0] is what you are looking for.

[0] https://github.com/emre/storm


Aside from the ones already mentioned, there's a library called dot-ssh-config[1] that is useful for generating SSH configs.

[1] https://github.com/aelse/dot-ssh-config


Sadly, this doesn't always work. Some apps that implement their own SSH don't read ~/.ssh/config. For example, the OS X Subversion GUI client Cornerstone doesn't support this.


I'm confused how this is better than just using an alias/profile?

Maybe it is just me, but I prefer to dump all my custom commands and aliases into .zshrc so they are easy to backup/track/find.


You have been downvoted, but the question is reasonable.

One reason is that invocations of ssh outside the context of a user typing ssh at the command line will also pick up these customizations. This is especially important for ssh, which has emerged as a main security and interoperability tool for Unix systems.

For example, if you use rsync, the tunneling and host alias conventions you set up in .ssh/ will carry over transparently to the ssh tunnel used by rsync.

Another example would be invocations of ssh in scripts (sh/bash scripts, even) that will not or might not read your .zshrc.


Ya, I think the underlying issue is I do things very differently than people on HN.

The idea of creating dependencies on a configuration profile inside a bash script is the exact opposite of what I would do.

I also could not rsync things to my local machine [bandwidth constraints] and would be rsyncing between remote machines; since that is a shared environment, I rely on explicit invocations instead of creating configurations/aliases.

Thank you for telling me how/why other people make different choices. I always do seem to have the blinders of my process is the only process I consider when commenting on HN. :)


To add to what @mturmon said: keep in mind that even GUI tools that support SSH connections (e.g. your favorite database client) will pick up this configuration, so you don't have to manually plug in all of the pieces for each session.

Just specify the alias host name as configured in ~/.ssh/config, and the user, identity file, and anything else you put there will be used as set.


This never occurred to me...I don't use GUI tools for database management outside of a local development environment or diagram generation.


A few commenters do not seem to be aware that it is perfectly possible to use passphrase-protected keys for automated tasks (cronjobs and the like).

The excellent (though unfortunately named) keychain[0] utility provides a ready and powerful abstraction for both ssh-agent and gpg-agent.

[0] https://github.com/funtoo/keychain



From the article:

    > No more password prompts
Is that - you ask - because he's using ssh-agent? No, it's because he doesn't tell you you should be using a password-protected key. Some kung fu.


There are many cases where you do not want, or cannot have, a passphrase-protected SSH trust. For example, say you have a central Nagios host monitoring a network. That Nagios host needs to connect to remote machines to run interesting monitoring scripts (disk % full, RAID controller queries, MPIO checks, etc.), and in these cases you do not want a passphrase blocking the SSH trust. You will also find this type of thing happening in many continuous deployment workflows as bits move from one machine to another. This is very common practice.


In that case, you would lock down the key so it could only be used to execute the strict subset of commands needed to do its job. It is very common practice, but it is not mentioned at all in the blog post.


That does indeed sound like such a case. However, none of these cases appear to be what the article is addressing; it's just telling you to use non-password-protected keys, with absolutely no discussion of it.


These sound like exactly the cases where you shouldn't use SSH. Use NRPE instead, or run the check on the target machine from cron and make it report back. There's no reason to use SSH for this, and at scale SSH adds considerable load to the monitoring host when starting connections.

For continuous deployment you can't easily work around using ssh, but at least the access can be limited to specific commands only.


No, in that case you limit the commands that Nagios is allowed to execute using this specific, passphrase-less key.

In this case you would limit this ssh-key to only be able to execute the nagios monitoring scripts. Nothing else.

You do this in ~/.ssh/config on the remote machine.


I believe you mean ~/.ssh/authorized_keys?

For anyone interested here's an SO question with an example: http://stackoverflow.com/questions/402615/how-to-restrict-ss...
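
A restricted entry in ~/.ssh/authorized_keys looks roughly like this (script path made up, key material truncated):

    command="/usr/local/nagios/libexec/check_disk",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... nagios@monitor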


Both scenarios you mentioned would, I believe, benefit from using keychain (see below).

Let's suppose I have an account tests@host which runs the tests (scripts) that need to login to an array of machines.

In order for keychain to be helpful here, you need two prerequisites.

1) You need to be able to log in interactively to tests@host once after bootup; after that you don't need to touch the machine again.

2) Then, the test scripts need to say

    . $HOME/.keychain/$HOSTNAME-sh
once before executing any ssh command (the line above simply imports the ssh-agent session variables into the current environment).
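
The interactive login in 1) is where keychain itself runs, typically from the shell profile; a minimal sketch (key name assumed):

    # in ~/.bash_profile of tests@host: prompts for the passphrase once,
    # then leaves the agent running for later non-interactive jobs
    keychain ~/.ssh/id_rsa
    . $HOME/.keychain/$HOSTNAME-sh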

edit: I removed the Nagios references as other posters rightly point out that there are more endemic ways to collect information with Nagios.


The situation with beginner-friendly SSH tutorials is, perhaps to a much lesser degree, comparable to that of crypto texts: good will alone does more harm than good.

This treatment of ssh does not mention ssh-agent and, perhaps more importantly, implies that there is a certain virtue in having private keys unprotected by sturdy passphrases lying around.

There is not; most emphatically not.


Hypothetically, if one was reading this article and had a large number of unprotected private keys around, one could add a passphrase to those keys by issuing

   ssh-keygen -f id_rsa -p


The article mentions ECDSA, but doesn't mention Ed25519, which is supported since OpenSSH 6.5: https://lwn.net/Articles/583485/

As a bonus, Ed25519 keys unconditionally use the bcrypt KDF to protect the private key on disk.
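
Generating one is a one-liner; something like (-a sets the number of KDF rounds):

    ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519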


This is why reading the man pages is useful; you'd get all this and more, including:

  - X11 Forwarding
  - Reverse forwarding (bind listening sockets on the remote machine,
                        redirecting to a local service)
  - SSH-Based VPNs
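
Roughly, one flag each (hosts and ports here are made up):

    ssh -X somehost                    # X11 forwarding
    ssh -R 8080:localhost:80 somehost  # remote :8080 -> local :80
    ssh -w 0:0 somehost                # tun-based VPN; needs root on both ends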


Best trick I learned in the past few years is SSH control sequences.

Disconnected from your host but not timed out yet? Press Enter, ~, . and the client will quit.


Ohh my, the day I learned this was one of the happiest in all my life. You can also configure this in your .ssh/config with EscapeChar, if you find yourself SSHing into other machines from a machine you're already SSHed into.
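
For example (character choice arbitrary):

    Host outer-box
        EscapeChar ^B

With that, a "~" typed at the inner session isn't swallowed by the outer one.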


You can also send a double tilde (Enter, ~, ~) to pass a tilde to your inner ssh session. So killing a session within a session is Enter, ~, ~, .


> Sharing Connections

I've tried this before, and what effectively always happened (to me) is that as soon as I started copying a file, I couldn't keep working in Vim until the transfer finished, because the copy would eat all the bandwidth. There may be a flag or setting to work around this, but I've never found it. When I open two connections, it is usually fine.


Depends very much on the hosts and network; but yeah, I have two different aliases for one host in such cases: one for data-trafficking, another for latency-sensitive applications (e.g. interactive jobs). This way, there are two SSH connections up, and while they're still contending for the link, it's quite a usable setup (in other words, I'm letting the TCP/IP stack handle the contention, rather than the SSH multiplexer).
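
A sketch of what I mean (hostname made up; ~/.ssh/mux must exist):

    # the interactive alias shares one multiplexed connection...
    Host work-interactive
        HostName work.example.com
        ControlMaster auto
        ControlPath ~/.ssh/mux/%r@%h:%p

    # ...while bulk transfers get a separate TCP connection
    Host work-bulk
        HostName work.example.com
        ControlMaster no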


I've found it just tends to have the primary connection die and never try to reconnect. ServerAliveInterval or what have you doesn't seem to help either: the link goes dead, and I won't be able to open any other sessions because SSH will just keep routing them into the dead control master.


I have had this problem, although fairly rarely. I have the following in my ~/.ssh/config:

    ControlPath /tmp/ssh_mux_%h_%p_%r
This sets the path of the control file used to share the connection. If it ever hangs, I can just delete the file. But in practice I found this doesn't happen often and I appreciate the speed boost I get from connection sharing.


One minor annoyance is that there's a maximum length for the ControlPath string (seemingly due to the maximum path length for Unix sockets), which I've occasionally hit when connecting to hosts with very long hostnames (AWS default hostnames can sometimes hit it, IIRC).

Also note that the docs recommend against using publicly accessible dirs such as /tmp/ for storing your mux sockets. I'm not sure of the exact threat (maybe just info leakage about what hosts you're connected to, since the socket permissions themselves are strict), but I use ~/.ssh/mux/ for mine.


Good point about the permissions. I'm not doing this on a shared host anyway, so no one else has access to that directory, but good to keep in mind.


Using autossh for establishing the master connection helps immensely here - if it dies, it will automagically reconnect.


Another recommendation: start an SSH server on port 443 on a server somewhere. Then if you're stuck somewhere on an untrusted network, one that blocks most outgoing ports or one that throttles non-HTTP ports, you can use SSH for tunneling and/or setting up a quick SOCKS proxy to get yourself encrypted, unblocked, full speed internet access.
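
On the server that's just an extra Port line in /etc/ssh/sshd_config:

    Port 22
    Port 443

and from the hostile network, something like (local port arbitrary):

    ssh -D 1080 -p 443 user@myserver.example.com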


I just learned about remote file editing with vim and scp thanks to this article; it's the only thing I didn't know about, and... wow, it's amazing. This will make my life much easier every time I have to remotely edit some config files on my servers.
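
For reference, the syntax (via vim's netrw) is along these lines; the path is made up, and note the double slash for an absolute remote path:

    vim scp://user@myserver//etc/nginx/nginx.conf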

As for the rest of the article, really nice stuff. Nice tricks for ssh newbies. I wish he had also talked about setting up a nonce system with ssh, moving sshd to a non-default port to prevent attackers spamming port 22, or even removing password authentication altogether.


Moving the ssh port is, IMNSHO, a stopgap measure; you should have exhausted all the other options (e.g. no passwords, no root login, denyhosts/fail2ban, etc.) before this even crosses your mind.

In other words, the inconvenience this brings is not adequate to the infinitesimal increase in security.


For me, I just don't like seeing /var/log/auth.log filled with 100s of lines of:

  Failed password for root2 from 82.192.86.44 port 44990 ssh2
  Failed password for admin from 82.192.86.44 port 44990 ssh2
  Failed password for sysdb from 82.192.86.44 port 44990 ssh2
  Failed password for scott from 82.192.86.44 port 44990 ssh2
(Yes, that IP has scanned my machine before)


And that's exactly what denyhosts is for. You'll see this line a few initial times, then the banhammer springs into action.

(It's fully configurable - the number of failed attempts, the length of the autoban, etc.)


Unfortunately one attempt is enough when there's a pre-auth vulnerability. Your ban-hammer doesn't help you there.


Until one of the millions of other compromised IPs begins hammering your machine minutes later...


That's the whole point. The ban-hammer in this case is automatic and will ban that one too after five attempts or whatever.


You're missing the point. One attempt is enough when there's a pre-auth exploit.


My point is that it still doesn't prevent your logs from getting filled up with crap.


Preventing logs from filling up is quite a cosmetic issue. Making the box hard to crack is certainly more relevant.

Note that I'm not advocating against a port change; just saying that it's the very last of the available options, as it's essentially security-by-obscurity, and thus only gives you a feeling of higher security (due to less spam in the logs).


Making security logs usable can (note the word) be a very important part of a security setup. Lots of people don't have the bandwidth to pay attention to noisy log files to look for anomalies.


There is also ssh-faker: http://www.pkts.ca/ssh-faker.shtml

Which would prevent even that first password failure attempt from occurring.


Sending a password over telnet seems like a bad idea..


True, but still, moving the port away from the default is always a good and effortless thing to do. Or at least making people aware of it.


Good...perhaps, iff you're aware that this is a cosmetic issue (less spam in the logs), rather than actual security (and that ports 222, 2222 and 22222 get just as much spam as 22).

Effortless...except you need to configure every client to use the non-default port. How much effort is that? IDK, depends on your use case.

That said, I consider it harmless; which is to say, the benefits and drawbacks are just about equal, IMNSHO.


> Effortless...except you need to configure every client to use the non-default port

I've never seen this as extra effort given I'm already in the ~/.ssh/config file adding an "IdentityFile" line anyway? The only time you wouldn't is if you are using the same (default) private key for every configured connection. I will faithfully assume that no-one is advocating for that :)


In other words, the inconvenience this brings is not adequate to the infinitesimal increase in security.

You are wrong. Please refrain from giving security advice.

Changing or filtering the SSH port prevents your host from being compromised by automated netrange sweeps in the event of a pre-auth ssh vulnerability. For this reason changing the SSH port is considered best practice.


Since port numbers are a very tiny space, that would amount to an infinitesimal increase in security, right? Essentially, 'hiding' the port is 'security through obscurity' which is a thoroughly discredited idea.


This is assuming that someone is specifically targeting your machine. In which case yes, changing the port number probably won't do much. But if someone is just hammering random servers on port 22, changing the port number is much more likely to be effective.


You misunderstand.

Changing the port does nothing against targeted attacks and it's not about 'hiding' anything. The purpose is to take your host out of the scope of automatic scans which almost exclusively focus on the most common ports (22, 2222, 22222 ...).


Okay, a pre-auth vulnerability is a plausible option I didn't consider; you are right.


While we are busy dispensing wisdom: Do use

    PermitRootLogin without-password
instead of 'yes' in /etc/ssh/sshd_config if you absolutely must have ssh root access.


I'd say, maybe it's wiser to use

    PermitRootLogin forced-commands-only
If it's necessary to run something as root, declare it beforehand. If you encounter a situation where you need to run something unusual on an automated basis, log in as an administrator (or edit your Puppet/Chef/Ansible/alike rules if you're on the smart system management side) and update ~root/authorized_keys.

If one needs to SFTP as root, they could use the `internal-sftp` target, too (although I haven't tested this; I don't SFTP as root, and if I must update some files, I setfacl on them).
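
With forced-commands-only, the "declare it beforehand" part lives in ~root/authorized_keys entries, e.g. (script path made up, key material truncated):

    command="/usr/local/sbin/nightly-backup.sh" ssh-rsa AAAA... backup@ops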


Why not use the more common "no"?


Because then root login would be disabled entirely. With "without-password" SSH-key based login is still possible (and no, that's not much of a security risk).


Is it really much harder to leak a private key than a passphrase? (It's obviously harder, but I'm not sure whether the difference is significant.)

While no one can peek over your shoulder at a key, if they got a keylogger on your machine, they could steal your ~/.ssh/id_* files as well (and sniff their passphrases too).


Brute-forcing a key is pretty much impossible and people - despite all advice - still use short and insecure passwords. Certainly a machine that does not allow root login at all is better than a machine with key-based root login, but a machine with key-based root login is better than password-based root login. The perfect is the enemy of the good here.


If you must allow SSH root access, in 2014, you are doing something horribly wrong, and this will come back to bite you.


Did you ever stop and think about this or are you just repeating something you read on "Hacker""news"?

Getting by /without/ direct SSH root access is often impractical (think about scp), and without-password is a secure way to have it.

Also, the more people know about "without-password", the less people will set PermitRootLogin to "yes".


Requiring admins to ssh to a different, unique-to-them, user, and use sudo from there for any operations requiring root is much better.

It's far easier to audit what's been done to the server, which is important not just for compliance but also for figuring out why something's broken suddenly.

It also means that you get to have your own shell history, your own shell settings, your own vim settings, etc, etc.

In general, having proper deployment, log collection and config management tools in place tends to mean you rarely need to scp files around at all; and in the cases when you do, you can work around it by scping them to some other dir and moving them into place with a sudo command.


...which is fine up until someone forgets to use visudo and buggers up the sudoers file so nobody can get back in to fix it.

A user login followed by su to root is a valid alternative, but I wouldn't have a problem with allowing key-only root access via sshd either.

You'd want the root key/password to be very tightly controlled for the reasons you mention, but having it set is (IMO) a worthwhile backup plan for when things go wrong.


Why do you need root access for scp? Just scp the file as a regular user and then use sudo to copy it into place.


tl;dr: "disallow root login entirely, everything else is bad" is cargo culting.

I said "impractical", not "impossible". Of course I can use sudo. But it's more work. I require root access a lot. It adds up quickly.[2]

And I hate typing passwords/passphrases. In fact, many of my passwords I can't remember. I've got an SSH agent for that, which reduces passphrase entry to yes/no (tab-space/space, actually).[1]

Also, I prefer my normal user account not to be a sudoer at all.

Besides, please consider that disallowing root access actually only gets you protection against root password guessing anyway. The "stolen key + passphrase" scenario in a sibling subthread is so absurd I felt the urge to bang my head against my desk. Sudo won't help you there either.

[1] Now please don't suggest "passwordless sudo".

[2] And there is another inelegance: /home is usually on a different partition than /, so your way will involve an additional copy. If /home is even large enough to fit that file.


[1] Why shouldn't I suggest it? Apparently it's obvious, so it would be nice to share.

[2] I'm not sure where you get that /home and / are usually on different partitions. They're usually the same partition on machines I've administered. But if that is the case, you can find/create a suitable folder on the same partition (/var/tmp comes to mind).

I understand you didn't say impossible, but this doesn't really seem to be impractical to me at all.


@Passwordless sudo: Because then you have effectively made your user root, and compromising your user account is enough to get root access immediately. If you do that, then why have a separate user at all?[3]

@Partitions: Separating /home and / prevents normal users from filling up /. (And if you put both on LVM, you can grow them as needed.) Yes, I've only had this on some of the servers I've run.

@Impractical: it's one additional command for something I do quite often[4], and I still don't see the benefit (reminder: I fully agree with never using "PermitRootLogin yes").

[3] Granted, it does provide some context separation in the sense that if you want to perform an administrative task, you have to explicitly use sudo. But it doesn't increase security, and it offers no advantage over "direct root access + normal user account".

[4] Not just scp, but also things like "less /var/log/messages" or "git clone root@host:/etc".

And again: what does "PermitRootLogin no" gain you over "without-password"? Why restrict it for no additional benefit?


I'm not really on one side of the argument or the other, but disabling root login means that an attacker doesn't automatically know the name of an account where login is permitted for one. Certainly not the best security mechanism, but if there happened to be some 0-day on the SSH server, you're much more likely to be safe from automated attacks.


Automated 0-day attack: fair point.

Though direct remote code execution is probably much, much more likely than authentication bypass.


Would you like to clarify how, from a security standpoint, the string "root" is worse than another user?

Allowing root login can be a user-management headache in multi-user environments, but strong SSH security can exist for root just the same as for any other user.


Because, LSMs aside, the user commonly identified by the string "root" has unrestricted access to nearly anything and deserves additional security. This is also why NOPASSWD on sudo is not a good idea: even if your key leaks (bad things happen) and an attacker gets in, the system is hopefully still secure.

If you need to conveniently update some files on a regular basis, chown or setfacl them to your usual user or group. If you need to update a root-owned file once in a blue moon, scp && ssh sudo mv it; it's not that hard, and it's better for security.

Oh, and obviously there are exceptions to the rule. Say, if you're configuring some freshly installed system and doing heavy config editing by hand (say, Puppet is not your thing or you just don't care about re-deployment), temporarily lifting security barriers is perfectly fine in most cases.


Because a potential intruder is more likely to try "root" than "akerl_35" to get access. It's no big deal to log in as any other user and then use sudo (if needed) or even su.


If a potential intruder knowing what username to use helps them bruteforce your SSH, then the problem is with the entropy of your password, not your choice of username.


Yes, but it's easier to teach admins to never use "PermitRootLogin yes" "because it's bad for security" than to teach them to never use weak passwords.


You are wrong, again.

Passwordless root SSH is perfectly fine, which is why it is enabled by default. By people who have thought a little longer and harder about all this than you. (Sorry for the tone. Maladvice like yours on public forums is demonstrably harmful.)


No offense taken, I am all for a strong opinion (as opposed to a bland one trying not to offend anyone at all). I will re-examine my assumptions; any specifically relevant links you would recommend?


I just noticed FreeNAS defaults to `PermitRootLogin without-password` and doesn't have a GUI option to change it to `no`. I'm not sure if that's the default on FreeBSD generally, but it surprised me.


One problem I have with SSH is DPI. Deep Packet Inspection seems to be behind the SSH block in place at a local library I work at. SSH out in any form just isn't possible there, even via a browser-based console (such as that used by Digital Ocean, for example). There doesn't seem to be a suitable solution to get around it offered anywhere.

My own fix was to use 3G to do the SSH work via a tethered phone and to use the wifi adapter to run the bulk of any other web traffic. It'd be great to have a workaround for DPI, though, if anyone has any experience there.


I'd recommend complaining to the library trustees about it.

There's no reason they should be refusing your traffic, and they are probably only causing a problem because some overzealous consultant cranked up the setting too high.

In my city, the compliance requirement that must be met is to have a policy to address "obscene, indecent, violent, or otherwise inappropriate for viewing in the library environment". Blocking SSH access is not required to meet that compliance requirement.

In our case, our library actually doesn't filter, it's left to the discretion of the librarians. And there is a time limit for access.


"Blocking SSH access is not required meet that compliance requirement."

Read up about SSH VPNs. Probably some kid set up a proxy accessed over SSH port forwarding, to access some pr0n site, got caught, and next thing you know, no SSH allowed anymore. If they were really smart they'd allow it but rate limit it to 2400 bit/s, which is fairly fast for console work but not so great for downloading animated pr0n gifs.

What's weird is librarians typically are pretty hard-core against censorship. The same place that's willing to go to court to keep "To Kill a Mockingbird" or "Huckleberry Finn" on the shelves will simultaneously spare no expense to block adults from accessing a breast cancer awareness site. A strange bunch, librarians.


The library has an obligation to make a good faith effort to meet whatever compliance requirements that they are faced with. That's it. It isn't a bank or military installation.

If the original poster brings in an air card, and starts watching porn with the volume cranked up, the library doesn't have a right or obligation to jam the cellular network. They do whatever their policy calls for (usually ask the guy to leave).

Librarians are very rarely the problem -- the trustees or other governing body usually is. Make a fuss and in most cases the problems will go away. YMMV depending on the community, of course.


Cheers for your thoughts. I suspect the UK public library authority I would have to appeal to would have zero motivation to help with SSH access, unfortunately. I imagine it has been blocked for a specific reason, probably some previous or expected abuse, as you've both mentioned.

The SSH over SSL solutions others have pointed out may do the trick for now.


If the browser-based console is also blocked, there's something fishy going on, since that doesn't use SSH.

In any case, you can try proxying SSH over SSL using stunnel: http://askubuntu.com/questions/423727/ssh-tunneling-over-ssl

Or you could try setting up OpenVPN, it's easy enough.
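
The client side of the stunnel approach looks roughly like this (server name and ports assumed); then you "ssh -p 2222 localhost":

    ; stunnel.conf: wrap local port 2222 in SSL towards the server's 443
    client = yes
    [ssh-over-ssl]
    accept = 127.0.0.1:2222
    connect = sshserver.example.com:443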


I didn't want to spend too much time poking around, but it seemed odd to me too (wrt the browser-console). Cheers for the stunnel/OpenVPN thoughts, they ought to get through. It would be great if SSH could itself emulate SSL, in the modern context of increased security requirements and censorship.


SSH over SSL seems to be what you need. Try:

http://blog.chmd.fr/ssh-over-ssl-a-quick-and-minimal-config....


Does seem to do the trick, and I have half of that already set up - just need to work out the config for Nginx. It's a smart workaround indeed. Thanks for that.



Similar to my library. They have a 100Mb connection, but I can't use ssh, git, ...


Try setting the sshd port to 443


I can recommend SSH Mastery - it's a nice, short read.

http://www.amazon.com/SSH-Mastery-OpenSSH-PuTTY-Tunnels-eboo...


I can't. I bought it with some high expectations; unfortunately, I knew almost everything in it from a couple of years of using ssh with my several VPSs. I was quite disappointed at the level of detail it had -- StackExchange level, I thought.


You're right that it doesn't go into detail, but it's short and provides a high-level overview of the most important features. It's much easier to learn the basics of something from a book and then find the details in the manual than to work with the manual alone.

But if someone expects to gain deep, expert-level knowledge of how OpenSSH works, then he'll be disappointed. What expert-level books can you recommend?


Unfortunately, I can't recommend anything, but I'm not a wide reader on the topic so my knowledge is limited.

Perhaps I was a bit hard on that book; my expectation was that it would be a deep, expert type of book. If that's not what you are looking for, it is perfectly fine. Actually, now that I think about it, the epub version is good to keep on your mobile phone as a handy, easily accessible reference for use in the coffee shop, so I guess it does get some credit from me.


Some good tips here - I like ControlMaster/ControlPath.

Note that, regarding the tip about ~/.ssh/known_hosts providing ssh auto-completion, adding server config to ~/.ssh/config will also enable auto-completion.


It's great for tab-completion of remote paths for scp & friends.

It does have a few quirks though. One that I've noticed is (IIRC) that closing a shared session isn't sufficient to pick up new group memberships when you reconnect. You actually need to kill the master connection as well[0].

The syntax for shutting down a master connection is a bit clunky as well:

    ssh -O stop -S ~/.ssh/mux/socketname hostname
I've been meaning to make a little script or two that finds the current mux sockets, tests them with -O check, and gives you a list of simple IDs so you can 'ssh-mux-kill $id' or something. In fact, it'd probably be a nice use for percol[1].
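
An untested sketch of the finding/checking part (socket dir assumed; the hostname argument is ignored when -S is given explicitly):

    for s in ~/.ssh/mux/*; do
        if ssh -O check -S "$s" placeholder-host 2>/dev/null; then
            echo "live: $s"   # kill with: ssh -O exit -S "$s" placeholder-host
        fi
    done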

[0] There might be other ways of refreshing group memberships, but I don't know of any.

[1] https://github.com/mooz/percol


About the "Lightweight Proxy" (ssh -D), if you want it to be transparent to the application (not require SOCKS support), you can use my tun2socks[1] program. This is useful if you can't or don't want to set up a full SSH VPN (which requires root permissions on the server). The linked page actually explains exactly this use case. It even works on Windows ;)

[1] https://code.google.com/p/badvpn/wiki/tun2socks


Or use tsocks. It proxies everything that uses TCP through a SOCKS proxy (such as ssh -D): http://manpages.ubuntu.com/manpages/hardy/man1/tsocks.1.html
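
Configuration is a few lines in /etc/tsocks.conf pointing at the ssh -D listener (port assumed), after which you prefix commands with tsocks:

    server = 127.0.0.1
    server_port = 1234
    server_type = 5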


Not system-wide though, and possibly incompletely and with bugs. The entire socket API is far from being simple to wrap like this, especially when you consider that it includes all the various IO functions (read/write, send/recv, recvmsg/sendmsg), nonblocking operation with select, poll, epoll, the p* versions of these with special behavior with respect to signals, the integration of these polling functions with non-wrapped fds, various socket options, splice functions, thread safety, shutdown semantics...

It shouldn't be hard to find a program which runs fine with tun2socks but breaks completely or subtly with tsocks.


Another useful (albeit ugly) hack is accessing a NATed host which doesn't even have a port forwarded, via an intermediate SSH host outside the target network: http://superuser.com/questions/277218/ssh-access-to-office-h...

(disclaimer: tooting my own horn here, but it is a mighty useful trick)


I have a setup similar to this:

  laptop - user (userid: me)
  F - firewall (userid: me)
  A - machine 1 in colo (userid: colo)
  B - machine 2 in colo (userid: colo, the machine I want to access)
  C - machine 3 in colo (userid: colo)
  .
  .
  100s of machines.
Trust (passwordless ssh login) is set up between me@laptop and me@F, between me@laptop and colo@A, and between all the colo machines (A, B, C, ...). So colo@A can ssh to colo@B without a password.

I am able to log into colo@A via F without a password, as I copied the ssh key there manually. (Path: me@laptop -> colo@F -> colo@A.)

QUESTION: Is it possible to ssh to the other machines (B, C, ...) via A while assuming the full identity of colo@A? (The path would be me@laptop -> colo@F -> colo@A -> colo@B/C/...) With my current config, when I try to ssh to B, it knows the request is originating from 'laptop' and still asks me for a password.


I think this guy answered it https://news.ycombinator.com/item?id=7658742

ProxyCommand ssh ...
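
Spelled out as config on the laptop, an untested sketch using the letters above:

    Host A
        User colo
        ProxyCommand ssh me@F -W %h:%p

    Host B C
        User colo
        ProxyCommand ssh A -W %h:%p

One caveat: this authenticates to B end-to-end with the laptop's key (the hops only forward the TCP stream), so your public key would need to be in colo@B's authorized_keys. To genuinely reuse colo@A's identity, you'd do something like "ssh -t colo@A ssh colo@B" instead.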


While the SOCKS proxy does not require any root (local or remote), it is only useful for programs that support it - which are not many.

However, apenwarr's sshuttle https://github.com/apenwarr/sshuttle is a brilliant semi-proxy-semi-VPN solution that, in return for local root and remote Python (but not remote root), gives you transparent VPN-style forwarding of TCP connections (and DNS requests if you want). It works ridiculously well. Try it if you haven't yet.
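
Typical invocation (hostname made up; it will ask for local root via sudo):

    # forward all TCP traffic, and DNS lookups, through the remote host
    sshuttle --dns -r user@server.example.com 0/0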


Very interesting article.

I have a shell script that helps with setting up trusted keys: trusted keys help if you need to run automated tests that involve several machines, or simply if you would like to skip typing a password on each connection.

http://mosermichael.github.io/cstuff/all/projects/2011/07/14...


Is sshfs a serious replacement for NFS? I've got a Buffalo NAS at home that I use Samba for, but Samba is too slow to watch hi-def videos over. NFS seems to be a pain in the neck to get working on that particular device, and I hate using it on a laptop. I guess I should probably just try it, but I can't see sshfs being any faster than Samba.


It'll probably be less performant than NFS, but it's a really great and simple way to mount remote volumes securely across the internet without having to worry about VPNs or any extra authentication.


Try:

    # sshfs -o direct_io,nonempty,allow_other,cache=no,compression=no,workaround=rename,workaround=nodelaysrv user@remote:/place/ /mnt/somewhere

For even more performance:

* On the server, start socat:

    # socat TCP4-LISTEN:7001 EXEC:/usr/lib/sftp-server

* On the client, do:

    # sshfs -o directport=7001,direct_io,nonempty,allow_other,cache=no,compression=no,workaround=rename,workaround=nodelaysrv user@remote:/place/ /mnt/somewhere


Whoa, what do all those options do?


For me, the best trick with ssh so far is to use ssh as a proxy command:

    ssh -o ProxyCommand="ssh -W %h:%p user@ssh_jump_host.somedomain.net -p some_non_22_port" user@some_host_inside.lan -D 1234

The above creates a dynamic tunnel (for use as a SOCKS proxy) through the jump host to reach HTTP hosts available only to the some_host_inside.lan machine.



