File format: as many of the following blocks as you like:

    Host $ALIAS              <-- whatever you want here
        Hostname www.example.com
        User someuser
        Port 1234
You can now ssh to that server as that user by doing "ssh $ALIAS" on the command line, without needing to specify the port or user with the usual command line arguments, or necessarily spell out the entire host name.
What is more, you can specify an abstraction for the tedious double-ssh where you first connect to some internet-facing host in order to gain access to an internal machine:
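A sketch of such a block, with placeholder host names — here gateway.example.com is the internet-facing host and internal.example.lan the machine behind it:

```
Host internal
    Hostname internal.example.lan
    User someuser
    # Tunnel through the gateway; %h and %p expand to the target host and port
    ProxyCommand ssh -W %h:%p someuser@gateway.example.com
```

With this in place, a plain "ssh internal" does the double hop in one step. (On OpenSSH 7.3 and later, "ProxyJump gateway.example.com" is a shorter equivalent.)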
The ssh -W option -- which replaces netcat -- is relatively new; Red Hat 5.x did not have it, nor did Ubuntu 10.04 LTS. Until OpenSSH 5.4, netcat was the way to do this sort of proxying.
I found references to it going back to 2008, and the git repo that has my dotfiles says I've been using it (in Linux) for 3 or so years. Maybe it depends on the OS/distro.
The patterns are similar to shell globs: * matches zero or more characters, ? matches exactly one.
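For instance, a single block can cover a whole naming scheme (host names here are invented):

```
# Matches web1.example.com, web2.example.com, ... (? = exactly one character)
Host web?.example.com
    User deploy

# Matches every host in the domain (* = zero or more characters)
Host *.example.com
    Port 2222
```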
I've worked around this by creating a ~/.ssh/config.d directory and splitting my configuration out into multiple files (normally by project). I then use dotdee[1] to watch that directory and automatically rebuild ~/.ssh/config anytime there is a change.
I don't know of any, but one thing you can do is split up your config into multiple files and then use `cat` to combine them after making a change. There's also https://github.com/markhellewell/sshconfigfs
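A minimal sketch of the cat approach, demonstrated in a scratch directory; in real use the paths would be ~/.ssh/config.d/* and ~/.ssh/config:

```shell
# Set up two per-project fragments (stand-ins for real config blocks)
demo=$(mktemp -d)
mkdir -p "$demo/config.d"
printf 'Host alpha\n    Port 2201\n' > "$demo/config.d/10-alpha"
printf 'Host beta\n    Port 2202\n'  > "$demo/config.d/20-beta"

# The actual trick: rebuild the combined config after any change.
# The glob expands in lexicographic order, so prefix files with
# numbers if ordering matters (ssh uses the first matching entry).
cat "$demo"/config.d/* > "$demo/config"
```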
Sadly, this doesn't always work. Some apps that implement their own ssh don't honor ~/.ssh/config. For example, the OS X Subversion GUI client Cornerstone doesn't support this.
You have been downvoted, but the question is reasonable.
One reason is so that invocations of ssh outside of the context of user invocation of ssh at the command line will also have these customizations included. This is especially important for ssh, which has emerged as a main security interoperability tool for Unix systems.
For example, if you use rsync, the tunneling and host alias conventions you set up in .ssh/ will carry over transparently to the ssh tunnel used by rsync.
Another example would be invocations of ssh in scripts (sh/bash scripts, even) that will not or might not read your .zshrc.
Ya, I think the underlying issue is I do things very differently than people on HN.
The idea of creating dependencies on a configuration profile inside a bash script is the exact opposite of what I would do.
I also could not rsync things to my local machine [bandwidth constraints] and would be rsyncing between remote machines, which being a shared environment, I would rely on explicit invocations instead of creating configurations/aliases.
Thank you for telling me how/why other people make different choices. I always do seem to have the blinders of my process is the only process I consider when commenting on HN. :)
To add to what @mturmon said: keep in mind that even GUI tools that support SSH connections (e.g. your favorite database tool) will pick up this configuration, so you don't have to manually plug in all of the pieces for each session.
Just specify the alias host name as configured in ~/.ssh/config, and the user, identity file, and anything else you put there will be used as set.
There are many cases where you do not want, or cannot have, a password-protected ssh trust. For example, say you have a central Nagios host monitoring a network. That Nagios host needs to connect to remote machines to run interesting monitoring scripts (disk % full, RAID controller query, MPIO checks, etc.), and in these cases you do not want a password blocking the ssh trust. You will also find this type of thing happening in many continuous-deployment workflows as bits are moving from one machine to the other. This is very common practice.
In that case, you would lock down the key so it could only be used to execute the strict subset of commands to do its job. It is very common practice, but is not mentioned at all in the blog post.
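For the record, the lockdown looks something like this in the monitored host's ~/.ssh/authorized_keys (the script path, key material, and comment are placeholders):

```
# This key may only run the named script: no shell, no TTY, no forwarding.
command="/usr/local/bin/check_disk",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... nagios@monitor
```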
That does indeed sound like such a case. However, none of these cases appear to be what the article is addressing; it's just telling you to use non-password-protected keys, with absolutely no discussion of it.
These sound like exactly the cases where you shouldn't use ssh. Use nrpe instead, or run the check on the target machine from cron and make it report back. There's no reason to use ssh for it and at scale ssh adds considerable load to the monitoring host when starting the connection.
For continuous deployment you can't easily work around using ssh, but at least the access can be limited to specific commands only.
The situation with beginner-friendly SSH tutorials is, perhaps to a much lesser degree, comparable to the crypto texts: good will alone does more harm than good.
This treatment of ssh does not mention ssh-agent and, more importantly perhaps, implies that there is a certain virtue in having private keys unprotected by sturdy passphrases lying around.
Hypothetically, if one were reading this article and had a large number of unprotected private keys around, one could set a passphrase on those keys with `ssh-keygen -p`.
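For instance, demonstrated here on a throwaway key so the passphrases can be supplied non-interactively:

```shell
# Create an unprotected demo key (in practice you'd already have one)
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_demo"

# -p changes the passphrase in place: -P gives the old one (empty here),
# -N the new one
ssh-keygen -q -p -P '' -N 'a sturdy passphrase' -f "$tmp/id_demo"
```

Run interactively, `ssh-keygen -p -f ~/.ssh/id_rsa` simply prompts for the old and new passphrases.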
Ohh my, the day I learned this was one of the happiest in all my life. You can also configure this in your .ssh/config with EscapeChar, if you find yourself SSHing into other machines from a machine you're already SSHed into.
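For nested sessions, something like this (the host name is invented) gives the inner connection its own escape character:

```
# The outer hop keeps the default escape (~); give the inner hop its own,
# so "~." closes the outer session and "%." closes the inner one.
Host inner.example.com
    EscapeChar %
```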
I've tried this before, and what almost always happened (to me) is that as soon as I started copying a file, I couldn't continue working in Vim until the file was done transmitting, because the copying would eat all the bandwidth. There may be a flag or setting to work around this, but I've never found it. When I open two connections, it is usually fine.
Depends very much on the hosts and network; but yeah, I have two different aliases for one host in such cases: one for data trafficking, another for latency-sensitive applications (e.g. interactive jobs). This way there are two SSH connections up, and while they're still contending for the link, it's quite a usable setup (in other words, I'm letting the TCP/IP stack handle the contention rather than the SSH multiplexer).
I've found it just tends to have the primary connection die and never try to reconnect. ServerAliveInterval and the like don't seem to help either - the link goes dead, and I won't be able to connect any other sessions because SSH will just keep routing them into the control master.
I have had this problem, although fairly rarely. I have the following in my ~/.ssh/config:
    ControlPath /tmp/ssh_mux_%h_%p_%r
This sets the path of the control file used to share the connection. If it ever hangs, I can just delete the file. But in practice I found this doesn't happen often and I appreciate the speed boost I get from connection sharing.
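For completeness, the connection-sharing setup that goes with that ControlPath line usually looks something like this (ControlPersist is optional; it keeps the master alive in the background after the first session exits):

```
Host *
    ControlMaster auto
    ControlPath /tmp/ssh_mux_%h_%p_%r
    ControlPersist 10m
```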
One minor annoyance is that there's a maximum length for the ControlPath string (seemingly due to the maximum path length for Unix sockets), which I've occasionally hit when connecting to hosts with very long hostnames (AWS default hostnames can sometimes hit it, IIRC).
Also note that the docs recommend against using publicly accessible dirs such as /tmp/ for storing your mux sockets. I'm not sure of the exact threat (maybe just info leakage about what hosts you're connected to, since the socket permissions themselves are strict), but I use ~/.ssh/mux/ for mine.
Another recommendation: start an SSH server on port 443 on a server somewhere. Then if you're stuck somewhere on an untrusted network, one that blocks most outgoing ports or one that throttles non-HTTP ports, you can use SSH for tunneling and/or setting up a quick SOCKS proxy to get yourself encrypted, unblocked, full speed internet access.
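A sketch of both halves, with placeholder names. On the server, sshd listens on 443 in addition to 22; on the client, one alias sets up the SOCKS proxy:

```
# /etc/ssh/sshd_config on the server
Port 22
Port 443

# ~/.ssh/config on the client
Host escape
    HostName myserver.example.com
    Port 443
    DynamicForward 1080
```

Then "ssh -N escape" brings up a SOCKS proxy on localhost:1080 that a browser or most chat clients can point at.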
I just learned about remote file editing with vim and scp thanks to this article, it's the only thing I didn't know about and... wow, it's amazing. This will make my life much easier every time I have to remotely edit some config files on my servers.
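The syntax in question, for reference (it's handled by Vim's netrw plugin; the user, host, and path here are examples) — note the double slash for an absolute remote path:

```
vim scp://someuser@myserver.example.com//etc/nginx/nginx.conf
```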
As for the rest of the article, really nice stuff. Nice tricks for ssh newbies. I wish he had also talked about setting up a nonce system with ssh, moving sshd to a non-default port to prevent attackers spamming port 22, or even removing password authentication altogether.
Moving ssh port is, IMNSHO, a stopgap measure; you should have exhausted all the other options (e.g. no passwords, no root login, denyhosts/fail2ban etc.) before this even crosses your mind.
In other words, the inconvenience this brings is not adequate to the infinitesimal increase in security.
For me I just don't like seeing /var/log/auth.log being filled with 100s of lines of:
Failed password for root2 from 82.192.86.44 port 44990 ssh2
Failed password for admin from 82.192.86.44 port 44990 ssh2
Failed password for sysdb from 82.192.86.44 port 44990 ssh2
Failed password for scott from 82.192.86.44 port 44990 ssh2
Preventing logs from filling up is quite a cosmetic issue. Making the box hard to crack is certainly more relevant.
Note that I'm not advocating against a port change; just saying that it's the very last of the available options, as it's essentially security-by-obscurity, and thus only gives you a feeling of higher security (due to less spam in the logs).
Making security logs usable can (note the word) be a very important part of a security setup. Lots of people don't have the bandwidth to pay attention to noisy log files to look for anomalies.
Good...perhaps, iff you're aware that this is a cosmetic issue (less spam in the logs), rather than actual security (and that ports 222, 2222 and 22222 get just as much spam as 22).
Effortless...except you need to configure every client to use the non-default port. How much effort is that? IDK, depends on your use case.
That said, I consider it harmless; which is to say, the benefits and drawbacks are just about equal, IMNSHO.
> Effortless...except you need to configure every client to use the non-default port
I've never seen this as extra effort, given I'm already in the ~/.ssh/config file adding an "IdentityFile" line anyway. The only time you wouldn't be is if you were using the same (default) private key for every configured connection. I will faithfully assume that no-one is advocating for that :)
In other words, the inconvenience this brings is not adequate to the infinitesimal increase in security.
You are wrong. Please refrain from giving security advice.
Changing or filtering the SSH port prevents your host from being compromised by automated netrange sweeps in the event of a pre-auth ssh vulnerability. For this reason changing the SSH port is considered best practice.
Since port numbers are a very tiny space, that would amount to an infinitesimal increase in security, right? Essentially, 'hiding' the port is 'security through obscurity' which is a thoroughly discredited idea.
This is assuming that someone is specifically targeting your machine. In which case yes, changing the port number probably won't do much. But if someone is just hammering random servers on port 22, changing the port number is much more likely to be effective.
Changing the port does nothing against targeted attacks and it's not about 'hiding' anything. The purpose is to take your host out of the scope of automatic scans which almost exclusively focus on the most common ports (22, 2222, 22222 ...).
If it's necessary to run something as root — declare it beforehand. If you encounter a situation where you need to run something unusual on an automated basis — log in as administrator (or edit your Puppet/Chef/Ansible/alike rules if you're on the smart system-management side) and update ~root/authorized_keys.
If one needs to SFTP as root, they could use the `internal-sftp` subsystem, too (although I haven't tested this; I don't SFTP as root, and if I must update some files — I setfacl on them).
Because then root login would be disabled entirely. With "without-password" SSH-key based login is still possible (and no, that's not much of a security risk).
Is it really much harder to leak a private key than a passphrase? (It's obviously harder, but I'm not sure whether the difference is significant.)
While no one can peek over your shoulder at a key, if they got a keylogger onto your machine, they could steal your ~/.ssh/id_* files as well (and sniff their passphrases too).
Brute-forcing a key is pretty much impossible, and people - despite all advice - still use short and insecure passwords. Certainly a machine that does not allow root login at all is better than a machine with key-based root login, but a machine with key-based root login is better than one with password-based root login. The perfect is the enemy of the good here.
Requiring admins to ssh to a different, unique-to-them, user, and use sudo from there for any operations requiring root is much better.
It's far easier to audit what's been done to the server, which is important not just for compliance but also for figuring out why something's broken suddenly.
It also means that you get to have your own shell history, your own shell settings, your own vim settings, etc, etc.
In general, having proper deployment, log collection and config management tools in place tends to mean you rarely need to scp files around at all - and the cases when you do, you can work around this by scping them to some other dir, and moving them locally with a sudo command.
...which is fine up until someone forgets to use visudo and buggers up the sudoers file so nobody can get back in to fix it.
A user login followed by su to root is a valid alternative, but I wouldn't have a problem with allowing key-only root access via sshd either.
You'd want the root key/password to be very tightly controlled for the reasons you mention, but having it set is (IMO) a worthwhile backup plan for when things go wrong.
tl;dr: "disallow root login entirely, everything else is bad" is cargo culting.
I said "impractical", not "impossible". Of course I can use sudo. But it's more work. I require root access a lot. It adds up quickly.[2]
And I hate typing passwords/passphrases. In fact, many of my passwords I can't remember. I've got an SSH agent for that, which reduces passphrase entry to yes/no (tab-space/space, actually).[1]
Also, I prefer my normal user account not to be a sudoer at all.
Besides, please consider that disallowing root access actually only gets you protection against root password guessing anyway. The "stolen key + passphrase" scenario in a sibling subthread is so absurd I felt the urge to bang my head against my desk. Sudo won't help you there either.
[1] Now please don't suggest "passwordless sudo".
[2] And there is another inelegance: /home is usually on a different partition than /, so your way will involve an additional copy. If /home is even large enough to fit that file.
[1] Why shouldn't I suggest it? Apparently it's obvious, so it would be nice to share.
[2] I'm not sure where you get that /home and / are usually on different partitions. There's usually the same partition on machines I've administered. But if that is the case, you can find/create a suitable folder on the same partition (/var/tmp/ comes to mind)
I understand you didn't say impossible, but this doesn't really seem to be impractical to me at all.
@Passwordless sudo: Because then you have effectively made your user root, and compromising your user account is enough to get root access immediately. If you do that, then why have a separate user at all?[3]
@Partitions: Separating /home and / prevents normal users from filling up /. (And if you put both on LVM, you can grow them as needed.) Yes, I've only had this on some of the servers I've run.
@Impractical: it's one additional command for something I do quite often[4], and I still don't see the benefit (reminder: I fully agree with never using "PermitRootLogin yes").
[3] Granted, it does provide some context separation in the sense that if you want to perform an administrative task, you have to explicitly use sudo. But it doesn't increase security, and it offers no advantage over "direct root access + normal user account".
[4] Not just scp, but also things like "less /var/log/messages" or "git clone root@host:/etc".
And again: what does "PermitRootLogin no" gain you over "without-password"? Why restrict it for no additional benefit?
I'm not really on one side of the argument or the other, but disabling root login means, for one, that an attacker doesn't automatically know the name of an account where login is permitted. Certainly not the best security mechanism, but if there happened to be some 0-day in the SSH server, you're much more likely to be safe from automated attacks.
Would you like to clarify how, from a security standpoint, the string "root" is worse than another user?
Allowing root login can be a user-management headache in multi-user environments, but strong SSH security can exist for root just the same as for any other user.
Because — LSMs aside — the user commonly identified by the string "root" has unrestricted access to nearly everything and deserves additional security. This is also why NOPASSWD on sudo is not a good idea — even if your key leaked (bad things happen) and an attacker got in, the system is hopefully still secure.
If you need to conveniently update some files on regular basis — chown or setfacl on them to your usual user or group. If you need to update root-owned file once in a blue moon — scp && ssh sudo mv it, it's not that hard, but better for security.
Oh, and obviously there are exceptions to the rule - say, if you're configuring some freshly-installed system and doing heavy config editing by hand (say, Puppet is not your thing or you just don't care about re-deployment), temporarily lifting security barriers is perfectly fine in most cases.
Because a potential intruder is more likely to try "root" than "akerl_35" to get access. It's no big deal to log in as any other user and then use sudo (if needed) or even su.
If a potential intruder knowing what username to use helps them bruteforce your SSH, then the problem is with the entropy of your password, not your choice of username.
Yes, but it's easier to teach admins to never use "PermitRootLogin yes" "because it's bad for security" than to teach them to never use weak passwords.
Passwordless root SSH is perfectly fine, which is why it is enabled by default. By people who have thought a little longer and harder about all this than you. (Sorry for the tone. Maladvice like yours on public forums is demonstrably harmful.)
No offense taken, I am all for a strong opinion (as opposed to a bland one trying not to offend anyone at all). I will re-examine my assumptions; any specifically relevant links you would recommend?
I just noticed FreeNAS defaults to `PermitRootLogin without-password` and doesn't have a GUI option to change it to `no`. I'm not sure if that's the default on FreeBSD generally, but it surprised me.
One problem I have with SSH is DPI. Deep Packet Inspection seems to be behind the SSH block in place at a local library I work at. SSH out in any form just isn't possible there, even via a browser-based console (such as that used by Digital Ocean, for example). There doesn't seem to be a suitable solution to get around it offered anywhere.
My own fix was to use 3G to do the SSH work via a tethered phone and to use the wifi adapter to run the bulk of any other web traffic. It'd be great to have a workaround for DPI, though, if anyone has any experience there.
I'd recommend complaining to the library trustees about it.
There's no reason they should be refusing your traffic, and they are probably only causing a problem because some overzealous consultant cranked up the setting too high.
In my city, the compliance requirement that must be met is to have a policy addressing material "obscene, indecent, violent, or otherwise inappropriate for viewing in the library environment". Blocking SSH access is not required to meet that compliance requirement.
In our case, our library actually doesn't filter, it's left to the discretion of the librarians. And there is a time limit for access.
"Blocking SSH access is not required to meet that compliance requirement."
Read up about SSH VPNs. Probably some kid set up a proxy accessed over SSH port forwarding, to access some pr0n site, got caught, and next thing you know, no SSH allowed anymore. If they were really smart they'd allow it but rate limit it to 2400 bit/s, which is fairly fast for console work but not so great for downloading animated pr0n gifs.
What's weird is that librarians typically are pretty hardcore against censorship. The same place that's willing to go to court to keep "To Kill a Mockingbird" or "Huckleberry Finn" on the shelves will simultaneously spare no expense to block adults from accessing a breast cancer awareness site. A strange bunch, librarians.
The library has an obligation to make a good faith effort to meet whatever compliance requirements that they are faced with. That's it. It isn't a bank or military installation.
If the original poster brings in an air card, and starts watching porn with the volume cranked up, the library doesn't have a right or obligation to jam the cellular network. They do whatever their policy calls for (usually ask the guy to leave).
Librarians are very rarely the problem -- the trustees or other governing body usually is. Make a fuss and in most cases the problems will go away. YMMV depending on the community, of course.
Cheers for your thoughts. I suspect the UK public library authority I would have to appeal to would have zero motivation to help with SSH access, unfortunately. I imagine it has been blocked for a specific reason, probably some previous or expected abuse, as you've both mentioned.
The SSH over SSL solutions others have pointed out may do the trick for now.
I didn't want to spend too much time poking around, but it seemed odd to me too (wrt the browser-console). Cheers for the stunnel/OpenVPN thoughts, they ought to get through. It would be great if SSH could itself emulate SSL, in the modern context of increased security requirements and censorship.
Does seem to do the trick, and I have half of that already set up - just need to work out the config for Nginx. It's a smart workaround indeed. Thanks for that.
I can't. I bought it with high expectations; unfortunately, I knew almost everything in it from a couple of years of using ssh with my several VPSs. I was quite disappointed at the level of detail it had -- StackExchange level, I thought.
You're right that it doesn't go into detail, but it's short and provides a high-level overview of the most important features. It's much easier to learn the basics of something from a book and then find the details in the manual than to work from the manual alone.
But if someone expects to gain deep, expert-level knowledge of how OpenSSH works, then he'll be disappointed. What expert-level books can you recommend?
Unfortunately, I can't recommend anything, but I'm not a wide reader on the topic so my knowledge is limited.
Perhaps I was a bit hard on that book; my expectation was that it would be a deep, expert type of book. If that's not what you are looking for, it is perfectly fine. Actually, now that I think about it, the epub version is good to keep on your mobile phone as a handy, easily accessible reference for use in the coffee shop, so I guess it does get some credit from me.
It's great for tab-completion of remote paths for scp & friends.
It does have a few quirks, though. One that I've noticed is (IIRC) that closing a shared session isn't sufficient to pick up new group memberships when you reconnect. You actually need to kill the master connection as well[0].
The syntax for shutting down a master connection is a bit clunky as well:
    ssh -O stop -S ~/.ssh/mux/socketname hostname
I've been meaning to make a little script or two that finds the current mux sockets, tests them with -O check, and gives you a list of simple IDs you can 'ssh-mux-kill $id' or something. In fact, it'd probably be a nice use for percol[1].
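A minimal sketch of such a helper, assuming sockets live under ~/.ssh/mux/ as above. The host argument to -O check is a dummy: when -S names an explicit socket, ssh talks to that socket rather than deriving a path from the host name.

```shell
# Probe each ssh mux socket with "ssh -O check" and report its state.
MUXDIR="${MUXDIR:-$HOME/.ssh/mux}"
for sock in "$MUXDIR"/*; do
    [ -S "$sock" ] || continue   # skip non-sockets and an unmatched glob
    if ssh -O check -S "$sock" dummy-host 2>/dev/null; then
        echo "alive: $sock"
    else
        echo "dead:  $sock"
    fi
done
```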
[0] There might be other ways of refreshing group memberships, but I don't know of any.
About the "Lightweight Proxy" (ssh -D), if you want it to be transparent to the application (not require SOCKS support), you can use my tun2socks[1] program. This is useful if you can't or don't want to set up an SSH tunnel (which requires root permissions on the server). The linked page actually explains exactly this use case. It even works on Windows ;)
Not system-wide though, and possibly incompletely and with bugs. The entire socket API is far from being simple to wrap like this, especially when you consider that it includes all the various IO functions (read/write, send/recv, recvmsg/sendmsg), nonblocking operation with select, poll, epoll, the p* versions of these with special behavior with respect to signals, the integration of these polling functions with non-wrapped fds, various socket options, splice functions, thread safety, shutdown semantics...
It shouldn't be hard to find a program which runs fine with tun2socks but breaks completely or subtly with tsocks.
laptop - user (userid: me)
F - firewall (userid: me)
A - machine 1 in colo (userid: colo)
B - machine 2 in colo (userid: colo, the machine I want to access)
C - machine 3 in colo (userid: colo)
...
100s of machines.
Trust (passwordless ssh login) is set up between me@laptop and me@F, between me@laptop and colo@A, and between all colo machines (A, B, C, ...). So colo@A can ssh to colo@B without a password.
I am able to log into colo@A via F without a password, as I copied the ssh key there manually (path: me@laptop -> colo@F -> colo@A).
QUESTION: Is it possible to ssh to the other machines (B, C, ...) via A while assuming the full identity of colo@A? (The path would be me@laptop -> colo@F -> colo@A -> colo@B/C/...) With my current config, when I try to ssh to B it knows the request is originating from 'laptop' and still asks me for a password.
While the socks proxy does not require any root (local or remote), it is only useful for programs that support it - which are not many.
However, apenwarr's sshuttle https://github.com/apenwarr/sshuttle is a brilliant semi-proxy-semi-VPN solution that, in return for local root and remote Python (but not remote root), gives you transparent VPN-style forwarding of TCP connections (and DNS requests if you want). It works ridiculously well. Try it, if you haven't yet.
I have a shell script that helps with setting up trusted keys: trusted keys help if you need to run automated tests that involve several machines, or simply if you would like to skip typing a password on each connection.
Is sshfs a serious replacement for NFS? I've got a Buffalo NAS at home that I use Samba for, but Samba is too slow to watch hi-def videos over. NFS seems to be a pain in the neck to get working on that particular device, and I hate using it on a laptop. I guess I should probably just try it, but I can't see SSHFS being any faster than Samba.
It'll probably be less performant than NFS, but it's a really great and simple way of mounting remote volumes securely across the internet without having to worry about VPNs or any extra authentication.
For me, the best trick with ssh so far is to use ssh as a proxy command:

    ssh -o ProxyCommand="ssh -W %h:%p user@ssh_jump_host.somedomain.net -p some_non_22_port" user@some_host_inside.lan -D 1234

The above creates a dynamic tunnel (for use as a SOCKS proxy) through the jump host, to reach HTTP hosts available only to the some_host_inside.lan machine.