It is actually pretty useful to require SSH as a two-factor authentication method for internal services, particularly extraordinarily sensitive internal services which you'll expose solely to technical employees.
I use it for exactly one purpose: authorizing the "ghosting" of a customer account. ("Log in as this user.") Putting that behind SSH means that anyone authorizing a ghosting has both a blessed SSH key and the password to it. Even rooting my session on our admin app, via reflected XSS or stealing my unlocked laptop out of my hands, doesn't get you that. (Our admin app, by design, is less-than-all-powerful. A malicious admin session could do a heck of a lot of damage to our business but, critically, would be unable to access personally identifying data about customers' clients.)
This is something I would wrap a simple CLI around, and then kick myself in the tukhus for having ever used the language's interactive client to make raw database queries and edits on the production server.
At my company, using an interactive shell on prod to invoke DB queries, even through the ORM, is basically considered a raw database query, and is strictly forbidden. It's untested code coming straight from your fallible fingers and manipulating the prod database. I don't see why it should be treated any differently from a raw db query. You can do some pretty powerfully destructive stuff with an ORM.
If there's anything destructive about that interactive session, it's likely to be the 'authorize_ghosting!' call. Somehow, I find it hard to believe sticking it in a one-line script would make it less destructive, but let's see.
I'm going to go out on a limb and suggest that it shouldn't just be turned into a one-liner that shoves ARGV elements directly at the database, but should actually do some validation of input. Even your one-liner is safer than raw irb since you can't make a typo and call the wrong method, but if you add validation, then yes, I think it's much, much less destructive.
I suppose when the args are both allowed to be arbitrary strings, the validation is already done, and the escaping is done by AR, so you can safely pass them unmolested. It just triggers my safety reflex to send ARGV elements on to the next layer without doing something to them.
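Something like this is the shape I have in mind (a sketch only, assuming a Rails app since AR came up; the script name, AdminTools, and authorize_ghosting! are stand-ins for the real call):

    #!/bin/sh
    # ghost: validate arguments before anything touches prod
    set -eu
    [ $# -eq 2 ] || { echo "usage: ghost <admin-email> <numeric-customer-id>" >&2; exit 1; }
    admin="$1"; customer="$2"
    # refuse anything that isn't a plain numeric customer id
    case "$customer" in
        *[!0-9]*|'') echo "customer id must be numeric" >&2; exit 1 ;;
    esac
    # keep the admin identifier to a safe character set before interpolating it
    case "$admin" in
        *[!A-Za-z0-9@._-]*|'') echo "bad admin identifier" >&2; exit 1 ;;
    esac
    exec bin/rails runner "AdminTools.authorize_ghosting!('$admin', $customer)"

Even that much puts a usage error between a typo and the database, instead of whatever irb would have done with it.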
I want to thank the author for testing this out and doing an implementation. When Mozilla Persona came out I thought it would be cool to make an identity provider which supported SSH-based auth. I never had much time to work on it, so seeing the various design problems was really great. I do think you can make some improvements when paired with a system such as Persona, but the fundamental challenge remains that SSH is not integrated into the web.
Still, as a programmer, I would love it if I could auth to GitHub et al. with SSH keys. I would actually feel more secure doing that than with a password. Keep the 2-factor token either way. I completely agree with the author's assessment that this type of system would not work for the average web user.
I definitely echo the sentiment of wanting to use these sorts of power-user-only tools. But I think Moxie's recent post, They Live[1], does an excellent job of explaining why we shouldn't accept that the tools are too hard for everyone else, then just hoard them for our own use anyway.
Adding on to the praise, I wanted to also comment -- thanks for doing an analysis of the results. The implementation is neat, but the honesty in revealing that what you've built is an interesting experiment, not necessarily an improvement on the status quo, is commendable.
There is still lots to be learned from the work you did -- both the technical and the analytical. I think this is probably the most striking example I've seen recently of "negative results are results, too".
No need for custom URL formats. Keep it really simple:
Use a <link rel="ssh"> and rely on browsers to prompt us to "log in with ssh".
If there's no public key, the browser could offer to run ssh-keygen for the user and save the results in the user's keychain.
If we use switcher[1], we can even put ssh and https on the same hostname and on the same port. This would be a recommended configuration since it would get through most proxies.
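i.e. something as small as this (illustrative only; the href and hostname are made up):

    <link rel="ssh" href="ssh://example.com">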
The demo doesn't work for me - when I load the echoed URL it shows up with "log out" on the left side, but once the globe appears I get just "sign in with SSH to place a pin" again.
However, the correct way to do this is the way browsers link to external protocols already:
This is no different than a mailto: link, a skype: button, an ftp:// link, or any other link in a webpage that uses a protocol other than HTTP/HTTPS.
Frankly, I don't want my browser talking to SSH servers. Because that means it has access to my SSH keys. If you don't think it's a problem for your browser (that thing that runs javascript from anywhere on the planet) to have access to your SSH keys, the point of my comment will no doubt be missed.
A browser should do one thing, and do it well: browse pages. Once you start adding SSH clients, RDP clients, etc, you are simply inviting security holes, and arguably missing the whole point of local apps.
That's not a secure option without DNSSEC. Given that we're unlikely to see significant DNSSEC adoption, serving the fingerprint over HTTPS (or another option altogether) would be preferable.
So use DNSSEC for SSHFP lookups. The Debian package (openssh-client), for example, added support for DNSSEC in its lookups almost 5 years ago (April 2010).
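Concretely (assuming a validating resolver; hostname and key path are placeholders): publish the records ssh-keygen emits, and turn verification on client-side:

    # on the server: print SSHFP records to paste into the zone
    $ ssh-keygen -r host.example.com -f /etc/ssh/ssh_host_ed25519_key.pub
    # on the client: trust validated SSHFP answers
    $ ssh -o VerifyHostKeyDNS=yes host.example.com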
The parent's suggestion would work if accessing the server by IP address directly, rather than DNS lookup. Assuming that the integrity of the data has been verified by the transport, I don't see the downside to the server providing the fingerprint in the HTML.
The parent's "solution" assumes the browser is reading the markup and connecting directly to the SSH server.
As I've said, this is a fucking horrible idea from a security standpoint.
This whole concept is bonkers given that client side certs already exist and already work, but if you have some reason to connect to SSH from a browser session (i.e. let's say you were providing a remote dev shell), a plain hyperlink that hands off the connection to the system's default "ssh" handler (i.e. a terminal app of some kind) is still the best solution here.
Let the SSH client worry about SSH keys. Let the browser worry about HTTP and HTML.
Sorry, I re-worded "solution" to the lighter "suggestion" after posting, and I agree that there is likely a more fool-proof architecture (e.g. one not vulnerable to XSS; even HTTP headers would be an improvement, I suppose).
The idea (as I understand) is that you trust the data that you've received from the server, and the server knows the public key of the sshd that it wants you to use for login, so it provides the fingerprint for that public key.
I agree that browsers shouldn't use SSH; it's overkill. But in the example (for humor), the browser does worry about SSH keys, because it is an SSH client. "Just use DNS with DNSSEC" won't work for direct IP addresses. Using client side certs is unnecessary, as the trusted web server is only telling you the conditions under which it is okay to authenticate. If you already trust the integrity of the data, it isn't more dangerous (from a security standpoint) than when you get a page over HTTPS that tells you to POST to /login.
Besides the fact that we both think that the browser as an SSH client is silly, I don't think the suggestion of providing the fingerprint in DNS with DNSSEC is any more secure than a trusted web server providing it, but it would not work for direct IPs.
Explain a real-world scenario where you have a web server with a valid certificate but no DNS entry for the server.
For example, if the certificate is assigned to an IP address. Not extremely common, but some people use it. [1]
You're inventing ridiculous scenarios to justify a nonsense concept of integrating html, browsers and ssh.
I stated (twice) that I think having the browser act as an SSH client is a silly idea. Not sure how I'm interpreted otherwise.
Both of my posts only point out that your intended correction (to just use DNS) wouldn't work for all cases, while the original post would work fine for authentication as far as I can tell. And that there are no inherent security concerns using in-band fingerprints, as opposed to looking them up via DNS w/ DNSSEC, if you already trust the integrity of the server response.
You keep replying along the lines of "well it's a bad idea to do SSH in the browser anyways", and I've already agreed with you there, because you're correct.
> I stated (twice) that I think having the browser act as an SSH client is a silly idea. Not sure how I'm interpreted otherwise.
I'm purely talking about the concept of linking to SSH from a HTML page. The original concept of "http auth by anonymous SSH" is ridiculous, and I'm glad you agree with that.
So, in terms of using a link to open an SSH session in general (let's assume a non-ridiculous use case, like opening a session on a dynamically created remote environment):
I explained a solution that works right now, needs a couple of new DNS records and potentially a one-line change in the user's ssh config file, to provide automatic, safe acceptance of the fingerprint.
Your argument is that it won't work with an IP address, but that they might still have a TLS certificate for the server IP address. That implies the person/organisation running the server spent all their money on "ownership" of a public IP via RIPE NCC (apparently 1650 euro/year) and an expensive SSL cert ($350/year) tied to that IP, but can't afford a $10 domain name to make SSHFP over DNS (w/DNSSEC) work.
The reason they would spend the money on the ownership of the IP and cert isn't because they can't afford the domain name. It's generally done for "mission critical" reasons because it takes out a class of weaknesses. DNS hijacking, DNS servers failing, DNS blocking by governments, etc. It could even be for vanity reasons, they may self sign, or w/e. It doesn't really matter, because the chance that these servers are going to be serving HTML to your standard web browser is pretty slim.
But I still think it'd be weird for DNS to be used here.
Assuming that the browser has already been modified to correctly handle whatever the SSH links do (e.g. by launching another program that has the fingerprint added) and the SSH link meets some security checks (same common name, same port running both the httpd and sshd), it seems wrong to me to have different capabilities based on if the common name is an IP address or a FQDN. I see the DNS solution to be kind of a "if all you have is a hammer..." solution, rather than a tailored solution for the link handling.
Anyways, thanks for the follow-up post, I think I see your POV and agree with you that the DNS method would require much less work in order to get it to work, given the current implementation.
The DNS solution has the benefit of working regardless of how the session is initiated.
But only for domains. Whereas the browser handling could say that the trusted-web-server on (common name) told me that X is a valid fingerprint for the sshd running on this same (common name). That sshd that runs on that port is customized and might create, like you mentioned earlier, a dynamically-created remote environment for the user.
If there were many web-servers/dynamic-sshd instances on a single domain, all the fingerprints would need to be added to the DNS. Granted, a unique use-case.. but the flexibility of browser handling would be nice.
The (common name) matching would be necessary as a same-origin policy.
edit: Basically, since the browser can understand the rest of the HTML/HTTP on the trusted page along with the ssh://name?fingerprint link, I think it makes sense to leverage it when wanting to open a connection to the same name, rather than requiring the name to be a domain name and using DNS to get those values. For ssh:// links that aren't the same (common name) you wouldn't be able to use fingerprints (just as you wouldn't be able to use the fingerprints from another name's DNS entry).
On second thought, the port wouldn't matter, just the common name, since you've got a cert for it.
I would argue that currently almost 100% of SSH sessions would be started in a manner that has nothing to do with HTML or a browser. In that scenario there is no alternative to SSHFP + DNSSEC, and it's available to use today.
Given that the original concept proposed for HTTP auth over SSH is accepted as ridiculous, the number of use-cases for opening a SSH session from a browser is still minimal, and even then, those clients can also get the benefits of SSHFP + DNSSEC the same as regular sessions.
> That sshd that runs on that port is customized and might create, like you mentioned earlier, a dynamically-created remote environment for the user.
That wasn't at all what I meant - I meant that you might offer a web UI to create a new remote environment for a person to use, and then provide a shortcut "login with SSH" button - just a regular sshd process on a regular *nix box. A more common option might be a button in a VPS provider's control panel, to quickly launch an SSH session to an instance (particularly if the hostnames are reasonably long and hard to type).
> Basically, since the browser can understand the rest of the HTML/HTTP on the trusted page along with the ssh://name?fingerprint link, I think it makes sense to leverage it when wanting to open a connection to the same name, rather than requiring the name to be a domain name and using DNS to get those values.
That implies a heavy tie between the browser and SSH. Technically, if you wanted to do this, one could write a small script/app that gets registered as the default ssh:// handler, parses out the FP and adds it to the known_hosts file before calling the regular ssh. But for obvious reasons this is dangerous without knowing the source of the FP. Which brings us back to a browser having to understand and integrate with SSH.
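To make the danger concrete, that handler is only a few lines (a sketch; the ssh://user@host?fp=SHA256:... format is invented here, and nothing below verifies where the URL came from - known_hosts wants the full key rather than a hash, hence the keyscan step):

    #!/bin/sh
    # hypothetical ssh:// protocol handler
    set -eu
    url="$1"
    dest="${url#ssh://}"; dest="${dest%%\?*}"   # user@host
    want="${url#*fp=}"                          # expected fingerprint from the link
    host="${dest#*@}"
    # fetch the key insecurely, then check it against the supplied fingerprint
    ssh-keyscan -t ed25519 "$host" > /tmp/hostkey 2>/dev/null
    got=$(ssh-keygen -lf /tmp/hostkey | awk '{print $2}')
    [ "$got" = "$want" ] || { echo "host key fingerprint mismatch" >&2; exit 1; }
    cat /tmp/hostkey >> ~/.ssh/known_hosts      # pin it
    exec ssh "$dest"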
I'm sorry, but I just don't see the problem you have with using SSHFP records for this, or the specific desire to integrate with a browser and HTML of all things.
At best, you'd still have to have your fingerprints in both places (DNS and markup) to support a) connections initiated manually and b) browser/ssh combinations that don't support manually specifying a fingerprint but do support SSHFP records (which, currently, is all browsers and any reasonably recent version of OpenSSH).
Frankly a better endeavour than getting a browser to understand SSH, would be to a) get more people using VerifyHostKeyDNS and SSHFP records and b) get more people using DNSSEC. Those are actual, real world things that are simply lacking adoption/usage, but do very much work.
Call it an amendment. I am neutral on the whole "connect to SSH through an HTML client", but I'll tell you what: the fingerprint is a necessary connection parameter, as much as the hostname or port. Not an afterthought.
Solve it however you like.
$ ssh --fingerprint ab:cd:...
or
ssh://user@hostname?ab:cd...
or
FINGERPRINT_VAR=ab:cd:... ssh somehost
or
what
ever.
One is ugly, the other is uglier. Fine. At least they're actually secure.
Guess: how many people actually check the full fingerprint before accepting it? On a good day, I remember the first four letters (2 bytes). Whenever I tell people, it's blank looks all around. What's a fingerprint?
And these are people who use SSH.
UI is important. UI matters. Good UI helps. UI UI UI UI UI.
The fingerprint is a required parameter for connection! Not just the hostname, also the fingerprint.
Sure, let the client cache it and automatically allow leaving it out on subsequent connects. But don't allow initial, fingerprintless connections. Never.
Implemented properly, this addresses the DNSSEC alternative. Do you have DNSSEC installed, do you trust it? Okay, get the key from there. Don't have it? And it isn't specified in the connection parameters? Woops, no connect. Impossible. Why? Because without a pre-supplied fingerprint, no SSH client should ever connect.
We don't need to debate DNSSEC here, the clients will speak. I know I'd be including the fingerprint in the connection parameters directly, but do as you please.
Make a get-fingerprint-insecurely tool, that just connects to a host and prints you what it thinks is the key. This allows people to still make insecure connections, but it's explicit. Make MITM and insecurity the cumbersome and explicit way. Not default.
This is such a frustrating, last-mile, almost-right-thus-wrong issue.
You're right in theory. In practice, unfortunately, people don't check the fingerprints. And the sad thing? We could do something to fix that.
Join the "fingerprints are a required connection parameter" movement.
PS: I think we're on the same side here. I don't go to sleep dreaming of SSH fingerprints: I'm just against allowing MITM by design. Of course, certificates, or any other means of ensuring the connection is not MITM'ed is just as good. Fingerprints as required connection parameters are just the easiest way to get that done, right now, today, in our way of working. But once everyone uses certificates properly: fine, forget about them. This non-MITM design needs to become part of everyone's understanding of SSH.
I agree that blindly accepting fingerprints is a bad idea, but I still think this can be solved for the majority of use-cases, with largely existing options, and without forcing the UI to be worse for users (when set up correctly).
From my understanding (I haven't tried this in practice yet), setting both StrictHostKeyChecking and VerifyHostKeyDNS to 'yes' will give most of what you want - it won't prompt to accept random keys (that's the StrictHostKeyChecking=yes bit) but it will explicitly trust SSHFP records it finds in DNS (that's the VerifyHostKeyDNS=yes bit). Obviously, you need to make sure your SSH client is using a DNS resolver library that actually supports & checks DNSSEC secured records.
Alternatively, you could just enable StrictHostKeyChecking (without VerifyHostKeyDNS) and use a simple shell script wrapper for SSH to accept a FP and append it to the known_hosts file before calling the real ssh.
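In ~/.ssh/config terms, the first option is roughly this (untested by me, as I said, and your resolver must validate DNSSEC):

    Host *
        StrictHostKeyChecking yes
        VerifyHostKeyDNS yes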
'ssh-keyscan' is your 'get-fingerprint-insecurely' tool in a nutshell.
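e.g.:

    # print whatever keys the network hands you, and their fingerprints - no verification
    $ ssh-keyscan -t ed25519 example.com > /tmp/k 2>/dev/null && ssh-keygen -lf /tmp/k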
The only substantial advantage to this scheme is you get server id pinning for free (via your known_hosts file), but a combination of HTTP public key pinning[0] and client-side certificates will give you all the same advantages with far less user effort.
You get pinning... and absolutely no verification the first time you connect, so anyone who got lucky enough to MitM you the first time (or first time since you deleted your known_hosts file or reinstalled your operating system or switched computers) can log in as you, without having to compromise any PKI. Meh.
I've worked in support of computing clusters that are widely used by researchers. By far the most common problem people had with using these systems was grokking ssh keys. Masters and PhD Computer Science people are routinely baffled by them, provide their private keys instead of their public keys, or provide both, etc.
People in general just don't get public/private key pairs. Any solution that requires any awareness or handling of the keys by the user is a non-starter. Sorry to sound negative but after a couple of decades of observation I'm convinced this will never change.
"And to prove he sent the package, he locks it with his key and then you use your copy of his lock to open it so that you know it's from him!"
It's a useful analogy for one specific use of public/private keys. But it doesn't capture the full spirit, which may lead to more confusion than just using no analogy.
It's not true in general that public-key encryption and digital signatures are inverse operations. (RSA has that property, but there are plenty of other algorithms that don't work that way.)
If you're writing for a lay audience that doesn't care about things like modular arithmetic, there's no reason to conflate the two operations. Just say encryption is like keys and locks, and signing is like... well... a signature.
After reading 'Why King George III Can Encrypt'[1] I really started hoping that something came of it. The metaphors they used seemed much more straightforward.
> provide their private keys instead of their public keys
I see this quite frequently from developers, as well. Hands down one of the worst design decisions of OpenSSH was to make private keys tab complete before public keys.
It will never change as long as using public/private key pairs is something exotic most people rarely use. I was baffled too when I first started using ssh keys, but today it's second nature. I get how they work, at least on a theoretical level. And I think most of us here on HN are like that. This isn't about being smart or of a mathematical bent, it just comes down to practice.
It could work, but it's a bit hard to wrap your head around public/private keys. Cryptographic keys have to be either an everyday encounter or something we never deal with directly. I don't think such a weird thing can be anything in between.
Yup, it's not really specific to SSH. The same problems occur using SSL X509 client certificates. Non-technical users will get confused unless they have a smart card/token that secures their private key.
I've seen this too. I've literally had (otherwise very smart!) people email me their PPK files (Putty Private Keys)! (Userify does help a lot with this now, however.)
The difference with crtauth is that you don't need an SSH server running in tandem, everything is handled by the crtauth library. Our main use case for this is for bots that need to access secure services.
An SSH key is equivalent to a (self-signed) SSL client certificate. I would love to see an easy way to use your SSH key as a client certificate - either an external tool or the browser itself could support this.
I like alternative two, except that I'd have it be
ssh <token>@<host> auth
This feels more natural to me and makes it easier to support other commands in the future should one wish to do so.
Regarding the part where he said:
>Running a custom SSH server along side a web server is not convenient. There is no good equivalent to the HTTP Host header, so hosting multiple SSH servers on a single IP address doesn’t work well.
That's not a problem. Your server got the host header over HTTP. When you generate the token, just tie the token to that host name.
Good point about the <token>@<host> format, I like it much better too.
You're right about tying tokens to host names, that would work. But you'd have to justify whether the extra complexity of multiple web servers preregistering tokens with an SSH front-end was worth it. Another approach would be embedding a static host identifier in the username as well.
Although my knowledge of the SSH protocol isn't complete, a related issue appears to be that servers prove their identity before clients send their usernames. That means servers sharing the same IP/port would also have the same host fingerprint.
It could be acceptable if all the services were run by the same organisation, but on a platform like Heroku it would be more of a challenge.
It wouldn't be too hard to extend the SSH protocol to include something like the HTTP Host request header. In the protocol version exchange section of the SSH RFC ( https://tools.ietf.org/html/rfc4253#section-4.2 ) the format described is:

    SSH-protoversion-softwareversion SP comments CR LF
In the comment section of the message, you could add something like "X-Host: hostname.wherever.org" and a smart SSH server could proxy the connection to the correct host. This happens before the key exchange occurs, so you'd still get strong authentication from having the right key.
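e.g. a modified client's identification string might look like this (the X-Host extension is, of course, invented here):

    SSH-2.0-CustomClient_0.1 X-Host: hostname.wherever.org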
I checked through the OpenSSH and Paramiko code, and both essentially ignore the comment section of the version exchange - everything between the first space and the CR/LF at the end. They do hold on to it for part of the DH key exchange, but they never try to interpret the bytes, so a modified client could keep sending the X-Host extension and stock OpenSSH would just ignore it.
Yeah, this was what I came here to suggest and had to see if anyone else had beaten me to it ;-) Using the username as the token means no "command" of ambiguous intent is being copy pasted but resolves many of the usability issues.
I am not sure about this. First, there is a built-in MITM attack here: the first time you connect to sshd, the server has no idea who you are. I suppose this could be mitigated by using HTTPS as the out-of-band channel for verifying client and server fingerprints. Second, ssh keys are somewhat limited. You can only have one public key for a private key. You cannot embed identity info in the public key. I would much rather see GPG keys used for this. For developer types that should be just as easy. For real people the UI would still have to be developed but could actually include useful features, such as user identities.
How do you know that the sshd you connect to is authentic and not a MITM? More importantly, how does the sshd know that the incoming connection is you and not a MITM?
How do you know that the sshd you connect to is authentic and not a MITM?
You'd need the server fingerprint on the site (served over https, of course).
More importantly, how does the sshd know that the incoming connection is you and not a MITM?
But that's the point, there is no "you" to authenticate, since you're signing up for a new account. The sshd generates a token URL and then stores your fingerprint with that token. Then you can use that token to login to the actual site and fill in your information.
If you're MITMing someone, the server shouldn't care, it's the client's job to make sure it's talking to the right server. See above.
>I am not sure about this. First, there is a built in MITM attack here.
This always sounds like a NSA shill argument to me. Sure, you can MITM, but then, you HAVE to MITM on the very first request of every user to make that work. That's much more expensive than vacuuming up passwords server side with gag orders.
>Second, ssh keys are somewhat limited... You cannot embed identity info in the public key.
That's ridiculous. Who would want to? You are looking for an authorization solution. SSH is for authentication.
Who said MITM isn't a threat? I'm talking about the difference between targeted surveillance (MITM) and dragnet surveillance. If you think you have any solution that would beat the NSA at targeted surveillance, you are dead wrong.
In the meantime, not trusting a third party server with a password would go a long way toward defeating dragnet surveillance. Read the reports. NSA defeats your SSL routinely, and they are MOST INTERESTED in the part where you supply a server with a password. They can only bust SSH some of the time. There is a very real security difference between the two.
Snowden got exiled bringing you the news. At least have the decency to read it.
Except the NSA almost certainly has the ability to MITM ssl connections, which means the whole CA thing doesn't gain you that much if NSA is what you care about.
Of course I can MITM something without CAs if you're on public wifi, provided I intercept the very first connection, so it's a valid question for defending against less sophisticated attackers.
Adding "-vvv" to the given SSH command, it looks like it uses the first SSH key given. I happen to have quite a few for various services. I suppose you could offer up one specifically by running
Yes, the author clearly has, because he talks about it in his conclusions. The exercise, then, is one of re-discovering why things are the way they are.
So why are people trying to replace the algorithm, when what we need to do is to replace the UX? Patches could be submitted for both Chromium and Firefox to give them a better client-cert flow, but I don't see anyone working on the problem; just a constant parade of restyling on tab-strip and toolbar and notification banner UX.
I mean, ideally, a client cert would be treated pretty much exactly like a cookie: generated on first connection to a website and automatically stored by the client; synced across browsers; etc.
And, I mean, you could treat a client cert like a permanently logged-in account credential, but it'd be much better to just treat it as essentially an unforgeable browser fingerprint+session ID: something that just "pairs" the client and the server, but where you still have to log in after that pairing process, but only once. (As long as the cert gets synced to another browser, that browser is now logged into the site, because it's now sending the fingerprint of a "session" that's logged into the site.)
With such a setup, you would be able to just "clear" the client cert (like a cookie) to log out, and then get issued a new one and log in again on that one, if you wanted. You wouldn't have to worry about losing your cert. You'd be able to have multiple devices with multiple client certs that are each separately logged into the same account, if you wanted. But, since each client cert would be associated with an account, and vice-versa, you'd be able to revoke a client cert's access to a particular account: to, effectively, log another device out remotely.
The nice thing about this workflow, actually, is that each and every HTTPS site could always issue each visitor a client cert, just as we currently generate server-side session IDs for each visitor. It'd become a best practice to have client cert issuance on by default at the load-balancer level, like HSTS is now. (And load balancers can just translate client certs into a plaintext "browser fingerprint hash" HTTP header, that can be detected by anything further along the request chain that currently knows how to deal with cookie or URL session IDs.)
Because HTTP is stateless and session-less, and cookies are a hack to make it stateful. We introduced the notion of a session where we already had one, in the form of TLS sessions.
Note that it would also make APIs much simpler by moving the authentication, authorization and session logic into the certificate, where it actually already is.
> We introduced the notion of a session where we already had one, in the form of TLS sessions.
But that would only apply to HTTPS. An extension to HTTP itself was necessary so state could be maintained for both HTTP and HTTPS. Especially in the mid-1990s (the era when cookies and HTTPS were introduced) when acquiring a CA-signed certificate was cumbersome and expensive.
That is true, but I'm one of those guys who believe HTTP-only should die.
Even without going as far as that, I believe that as soon as you have to manage some kind of session you're going to have private data flying around, and that should be protected in TLS.
A correctly done client cert provides mutual authentication (the server's SSL cert and private key from when you signed up are the ones needed to sign you in in the future).
Somehow I don't qualify a misplaced apostrophe (either by human error or bad autocorrect) on the same scale as choosing a fundamentally bad database or failing to understand/take advantage of powerful technologies such as client side certificates.
We use client certs in the company where I work to authenticate into all internal pages. Once the infrastructure was in place (an easy way to generate a certificate to any new colleague) it has been a breeze to setup and use.
Server side, it is 3 or 4 Nginx lines. Client side, people only have to get the cert, click it to open, and install it on Chrome. After that, they are happy they don't have to remember any password for our different services.
I think we could implement it for our users if we gave the option to "login using cert" and "send me a new cert" (to the email they used for sign-up). People need minimal training but they don't have to be computer literate to do it (half of our company are not, and they found it better than passwords once the system was in place).
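For reference, the handful of Nginx lines look roughly like this (paths and the pass-through header are placeholders; ssl_verify_client and $ssl_client_s_dn are stock Nginx):

    # require a client certificate signed by our internal CA
    ssl_client_certificate /etc/nginx/internal-ca.crt;
    ssl_verify_client      on;
    # optionally tell the backend who connected
    proxy_set_header       X-Client-DN $ssl_client_s_dn;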
SQRL is vulnerable to phishing and spoofing attacks due to the lack of mutual authentication. I can send you a phishing email with a link to a webpage that looks like PayPal, on mynastydomain.com, and then display an actual PayPal QR code to you. There's no complete solution to this all the time you're passing tokens with an unauthenticated association over an air gap. The IP binding proposal is a just a disaster for a number of reasons.
This system provides mutual authentication and the ssh command (that the user will surely copy and paste) doesn't contain any tokens. The session token is produced after authentication has taken place.
It's there buried in the discussions somewhere. The basic (and old) idea is to shove the IP of the user (or spoof, as the case may be), as seen by the web server, into the QR code and then tie it to the session token using a MAC. When the SQRL app passes the signed SQRL data back to the web server it passes this back as well. The server can then reverify the user's IP (remember they're now using the app on another device).
IP binding is worth doing but there's no way for the app to warn the user that the IP differs. You have to trust the server implementation of SQRL (which despite what Gibson claims, is actually fairly complex on the server-side)
Other issues are discussed on the page you linked entitled "Details and Limitations of IP-based MITM detection"
It doesn't, but your SSH server wouldn't be able to produce a valid token for Paypal.com.
Btw, I'm not liking this SSH solution either; I was just pointing out that it's still better than SQRL, which is awful in that it has exactly one advantage (it protects users against password reuse) and many nuanced flaws.
Host authentication is a job for SSL, though. So you should simply not log in on a website if it cannot be verified that it is actually Paypal. This holds for both SQRL and passwords and seems to be an independent issue.
It also applies to this case because the ssh server is supplied by the website.
This question might be hilariously naive... but why don't web browsers make signed requests, just as web servers make signed responses?
If I create an account on a website, then associate a public key with my account, shouldn't the browser be able to sign each request with my key? The website then wouldn't ever even have to deal with cookies or sessions as long as I was logged in to my browser.
Or better yet, if my key is publicly known and trusted, couldn't a website know who I am before I even create an account? It could skip the signup process entirely.
> but why don't web browsers make signed requests, just as web servers make signed responses?
They can. The WebID people at the W3C are trying to make a standard out of this, that would allow for federated authentication in which you can securely share data with websites you visit. (They're not getting terribly far - huge portions of the stack are missing around the subject of "tell a server you own that another server can access <these bits of data>, where the other server is identified using a WebID".)
The downsides? Basically, the UI sucks and is inconsistent between browsers, up to and including failing to let you log out or have multiple accounts on a website.
You can use client side (x509) certs for auth over ssl/tls (and also for ssh - in theory at least. I've never tried that). Cacert.org uses client certs for authentication in the web app/page for generating certs for example. I believe the rest of the session is handled similarly to regular ssl/tls (in essence negotiating a shared secret to use as a session key for symmetric encryption, along with some kind of signature to build an authenticated cipher -- if a symmetric authenticated cipher can't be negotiated).
My bank used x509 certs for a while -- but in the end it proved too hard both on users/support and on developers (catering to all browsers - as cert management has to be integrated in the browser ui/chrome -- and so is different for every browser).
Another thing people seem to miss here is that ssh also has its own cert scheme. So you could advertise a server cert in DNS - and the client would only need to trust the CA cert (a single cert for an entire organization's ssh servers).
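e.g. with OpenSSH certificates, the org signs each host key once and clients carry a single line (hostnames and file names are placeholders):

    # on the CA machine: sign the server's host key
    $ ssh-keygen -s ca_key -I host.example.com -h -n host.example.com \
          /etc/ssh/ssh_host_ed25519_key.pub
    # (sshd_config on the server then needs a HostCertificate line pointing at the -cert.pub)
    # on each client, one known_hosts line covers the whole org:
    @cert-authority *.example.com ssh-ed25519 AAAA...contents-of-ca_key.pub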
That feature has been in substantially every browser for years; I am pretty sure IE 5 had it.
The problem is that the UI for client side certificates is off-putting, inconsistent and terrible in every single browser (also people expect to be able to use your web service from more than one computer).
Well, it could maybe be an opt-in feature, like when a site requests to use your webcam or microphone. A small prompt that the server is requesting to access your identity or something.
I love SSH, but I don't want to actually use SSH for this. That's not the important part. All I want is key-based authentication, and that doesn't need SSH. I want to be able to plug a security token into a USB port and be logged in, without even having to click a sign-in button. Reformat, reinstall, reboot, plug in security token, launch browser, type news.ycombinator.com and I'm already logged in.
You just have to touch the USB security token to have it release the key. I believe the idea is that malware can't really trick you into touching it, so it's more secure.
The author makes a very good case for why this approach won't threaten password-based authentication, including:
> ...hard for first-time users to get right: ... The private key being unlocked and available via ssh-agent.
For this to be convenient for daily use, ssh-agent is essential, and that could expose naive users to compromise. I know enough to disable ForwardAgent in my personal config by default and generate site-specific keys for hosts I can't trust, but that's beyond most ordinary users and even many of the technical professionals I deal with.
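The per-host stanza I mean is just this (hostname and key file are placeholders):

    Host untrusted.example.com
        ForwardAgent no
        IdentityFile ~/.ssh/id_untrusted
        IdentitiesOnly yes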
It's a shame PGP was the target of so much persecution when it came out. Maybe by now we would have worked out the key exchange problem and would all be enjoying personally encrypted communication on the Internet. I sometimes feel that any attempt to move beyond passwords without realizing that ideal is doomed to failure.
I think one solution would be for the browser (or an extension) to expose an SSH agent API through the DOM, but gate access to that agent with local UI that confirms operations. "Facebook.com would like to view your public identity information. Continue?" with an "always allow for this site" option. "Facebook.com is requesting a signing challenge to verify your public identity. Allow?" etc. It could even include an identity manager which would allow you to generate different identities to present to different sites on the fly. You could have the option of having the requests pass through to your SSH_AGENT_SOCK (still gated by UI though) if you want, or you could just let the browser maintain its own independent agent (or potentially a combination).
So you have a situation where the client has already authenticated the server using TLS, and now the server wants to authenticate the client too. This solution wants to do that by setting up a new channel, authenticating the server /again/ and then also the client.
I love the idea. I think I'll implement something like that to automatically log into the few services I host on my webserver: on machine boot and every hour, `ssh webserver echo $SOURCE_IP > token`, and in the web app, if the token is not too old and the source address == $(cat token), then auto login.
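A sketch of the client-side half (single quotes matter so the variable expands on the server, not locally; SSH_CONNECTION's first field is the client IP; the token path is made up):

    # run at boot and hourly from the machine you browse on:
    ssh webserver 'echo "${SSH_CONNECTION%% *}" > ~/auth-token'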
That technique would be useful when a user wants to add a public key to his github profile (or similar service). Instead of asking the user to copy & paste his public key, the site might ask him to enter `ssh github.com authtoken` in a terminal.
So then you have two avenues for attack, if someone wants to inject their own public key into your account - the Web UI and some form of custom SSH server.
Besides which you would need more than just "ssh github.com authtoken" - it doesn't identify who you are (thus knowing who to save the public key for)
> So then you have two avenues for attack, if someone wants to inject their own public key into your account - the Web UI and some form of custom SSH server.
That's correct. SSH lacks a PKI, and github can't sign their ssh public keys with a trusted authority. If someone is intercepting your traffic, he can redirect your connection to port 22 to a malicious ssh server and save his own malicious public key to github. To prevent that, github must present the ssh fingerprint on their web page and the user must check that fingerprint against the one he sees in the terminal. Thanks for the clarification.
> Besides which you would need more than just "ssh github.com authtoken" - it doesn't identify who you are (thus knowing who to save the public key for)
authtoken is supposed to be a unique identifier, and the github server knows that it's associated with your account.
> github must present the ssh fingerprint on their web page and the user must check that fingerprint
yet another case where DNSSEC-secured SSHFP records would automate this. However, given that people currently commit and push passwords, private keys and who knows what else to places like GitHub, it seems unlikely these people would recognise why a connection might be refused (e.g. because of an invalid fingerprint) anyway.
> authtoken is supposed to be a unique identifier, and the github server knows that it's associated with your account
ah sorry I thought "authtoken" was meant to be some command to run on the server.
Frankly, I think things like adding a public key (whether to GitHub or a system that allows SSH logins) over the internet are probably safer behind a two-factor auth system (e.g. password + OTP or client cert + OTP) - the people who need to use it can be shown how to copy their public key quite easily (if they can't open Terminal.app, type "cat ~/.ssh/id_rsa.pub | pbcopy" and then paste the result into a web form, can they really handle Git, or even SSH for that matter?)
Interesting; this could potentially solve a pain point for me. I run a small set of services for some research collaborators and have been phasing out passwords, so users are authenticated solely via their public keys. That works for most classic "Unix server" stuff. But for the use-case of a remote GUI for scientific computing, the classic solution of network-transparent X is gradually losing out to a more iPython-style solution of a browser front-end. In which case I need to authenticate the same people in their browsers, and it'd be nice to do it via the same public keys.
That's a retarded retort. In the SSH case you have ONE password to remember. Not one per website. Furthermore, the password never leaves your machine. Log in to a hundred websites with SSL and a password and the NSA comes along, collects all your passwords server side, knows which ones you reuse, knows your password generation patterns, everything. You are completely nuts if you think these two things are the same.
I love this idea, but I have some issues with their chart, particularly as related to authentication via email.
A developer with a few minutes of thinking time could get around the "red" spots associated with email authentication, and fill in any missing dots pretty easily.
And for better or worse, email is currently one of the better ways to create an identity for the masses, and it's also one of the few systems that your average user can justify the pain of setting up 2 factor auth on.
The only thing better would be to use text messages, if you could come up with a way to lower the impedance for the user.
> You can see where this is heading—a half-baked reimplementation of TLS client authentication with all the same usability nightmares.
Yeah, that's what I was thinking -- why would you do this instead of just using TLS client auth? TLS client auth is a usability nightmare -- but I'm not sure it's any _worse_ than what's in OP.
A brilliant idea. As far as usability for a normal user is concerned, how feasible would it be to modify a browser to take the command directly via the address bar, with a thread internally handling the ssh part and redirecting the user directly to the newly generated link?
It would be feasible as a Firefox add-on, since those have the same privileges as the browser process itself.
AFAIK Chrome's extension APIs are too restrictive to allow for something like that.
Not that hard (I've written Firefox extensions that run external programs), but what would you gain over SSL client certs, which are already supported?
A beautifully simple implementation for signing into websites using SSH. I think this idea can be very helpful for developers; perhaps give them good scripts to hot-reload a patch on production servers.
When SSH clients first connect, they ask the server for a pseudo-terminal (PTY) to enable terminal colors, clearing the screen, etc.
In this case the SSH server declines that request because it simply wants to send back one line of text. The client falls back to text mode and works fine, but issues the warning anyway. You can suppress it with `ssh -T mars.vtllf.org` like I did in the demo video.
On the surface, it means SSH tried, and failed, to initiate an interactive session. In context, it means that either the developer has disabled new users, or this is simply broken at the moment.
This is brilliant. Here's a full-fledged solution: the same way every OS supports users, it should support identities attached to the user account (~/.ssh). Upon OS installation / user creation, it would prompt the user to either generate a new identity or import an existing one (from a flash drive, from a cloud, etc.). This would be the key pair. And the browser would automatically use this, without any terminal session or any such nonsense. Brilliant, right? No, because user accounts are commonly shared: here, can I check my mail on your computer? Yes, password won again :P
SSH is one of those things you get forced to deal with a couple times a year for some irritating task, and it involves firing up Putty, figuring out where you left your key file, trying to remember how to actually load that key file, and a bunch of following steps on some website just to get connected to what you were trying to connect to.
It is not something that I love.
If you force me to use it just to log in to your website, I will decline to do so.
I think I found the Windows developer :). It is loved by pretty much everyone but Windows developers because it really is the best thing since sliced bread.
Funny thing is that many windows developers and administrators like to use RDP, while *nix users generally don't like it at all (usually because RDP doesn't allow easy scripting to automate routine tasks).
Yeah, the Windows-based alternatives (RDP, VNC, or even shudder something like GoToMyPC) are waaay better for quick dead-simple system administration. /s
SSH works great for systems that didn't have to have networked multi-user support hacked in later in their lifecycle. Systems that let you actually get stuff done without having to paint a whole GUI environment to do it.
I can automate running "git pull && mvn clean package && deploy-tool-of-choice" across N servers. I can't really automate "Okay, wait for the GUI to paint and then be sent across the wire, then click Start, then click on the GUI Git tool start menu entry, wait for it to load, then click the 'sync' button, then wait for it to finish, then open a GUI(!) command prompt (in my GUI environment!) and cd to the right directory, then type 'mvn clean package' and wait for it to finish, then click Start, then click on the deploy tool start menu entry, then wait for it to load, then click 'deploy'".
The fact that there are a number of tools that try to hack that sort of behavior in by emulating pixel-scrapeable virtual displays and mice/keyboards is a testament to how shoddy the alternative to SSH is.
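(The SSH version of that whole dance, hostnames and path assumed:)

    for h in app1 app2 app3; do
        ssh "$h" 'cd /srv/app && git pull && mvn clean package && deploy-tool-of-choice'
    done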
Just the fact that you think SSH equates to Putty is laughable, and shows your ignorance. .NET and Java developers, at least the ones who seem to think anything non-Windows is bad, don't understand SSH. When you don't understand it, you don't see the value of using it, which means it doesn't get installed on your servers, and you are left thinking it's only an 'irritating task' kind of thing.
I use SSH on a daily basis. Right now, from work, I have five SSH connections open, one is tunneling, one is tunneling and providing an interactive shell on one of my personal machines, one is connected to the live environment for server monitoring, and two are connected to the development environment. I'm running everything from applications, to the command line to my IDE (vim) all in SSH sessions.
Just because you personally don't understand a key bit of internet technology doesn't mean it's the same for everyone else. There are those of us who see a valuable tool and use it properly.
I'm sensing a bit of ignorance or at least unrelated aggression toward .NET and Java developers. I'd actually say that ignorance of SSH is more related to application vs server developers, and since Java is one of the most popular server platforms, it seems unfounded to even bring up Java developers in your diatribe.
Note that although PuTTY has a graphical user interface, it is hardly user friendly. Using the command-line ssh (with a decent shell) is much more comfortable.