In terms of possible compromise, I rate the possibility that my phone is compromised way higher than my laptop. Adding a factor is a good idea in terms of security (not in terms of availability and ease of use, but definitely in security), but replacing it entirely... No.
Why would I even want to remove id_rsa? What's the problem being solved here?
The problem is that your private key stored in ~/.ssh/id_rsa can be read by any user-level application. The private key is even vulnerable if you passphrase encrypt it. See our deep dive into the threat model: https://blog.krypt.co/why-store-an-ssh-key-with-kryptonite-9...
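To make that concrete, the default OpenSSH key file is typically protected by nothing more than ordinary file permissions:

$ ls -l ~/.ssh/id_rsa   # -rw------- and owned by you, i.e. readable by any process running as your user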
This is why we move it off the computer and onto a phone. The security is comparable to using a Yubikey. I'm not sure why you say your phone is less secure than your laptop. On the phone, apps are sandboxed and the private key never leaves the Kryptonite sandbox.
Neither Google nor Apple has root access on a Yubikey, nor does that key include some sort of wireless transmitter that would allow unnoticed data transfer to or from it.
Furthermore, it is by now fairly straightforward to set up sandboxing within a single user account (using SELinux, AppArmor or whatever else), or to use multiple users and classical privilege separation to achieve the same effect.
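A rough sketch of the multi-user variant (the "sshkeys" account name is made up here; an SELinux/AppArmor policy would take more space to show):

$ sudo useradd -m sshkeys                                   # dedicated account that owns the key
$ sudo -u sshkeys mkdir -m 700 /home/sshkeys/.ssh
$ sudo -u sshkeys ssh-keygen -t ed25519 -f /home/sshkeys/.ssh/id_ed25519
$ sudo -u sshkeys ssh -i /home/sshkeys/.ssh/id_ed25519 user@server   # my regular user never reads the key file

Nothing running as my regular user can open that key; it is only ever touched through sudo, which can be logged and restricted.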
It is also telling that your "Threat Models" in the link above do not discuss attacks against the phone at all.
Edit to add: You currently also do not have the ability to use my keys. If I were to install the app (and set it to auto-update, as suggested so vigorously elsewhere), all it would take is a tiny little update from you, with no public oversight, to own every server I have access to. How is that possibly improving security?!
Thanks for the one sane post in this thread, as opposed to the iPhone is more secure than your Linux desktop FUD that tptacek seems to be spreading here.
If you have a passphrase-encrypted key, you can see this for yourself:
$ eval `ssh-agent` # make sure an empty agent is running
$ ssh user@server # enter passphrase on first login
$ ssh user@server # passphrase no longer needed
This is wrong. Keys that were not explicitly added to ssh-agent with `ssh-add` will not be available from the agent unless you have explicitly enabled AddKeysToAgent in your ssh_config. [0]
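For reference, the behavior described above only appears if you opted in with something along these lines:

# ~/.ssh/config
Host *
    AddKeysToAgent yes

Out of the box, OpenSSH does not hand keys to the agent on first use; you have to `ssh-add` them yourself.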
Forgetting about config options you once enabled is not uncommon for a user, but it doesn't exactly speak well for a company selling an ssh-agent alternative as a product. I also tend to agree with the adjacent poster who emphasized the inadequate threat-model analysis in your blog post, which entirely ignores or fails to address critical points.
I also share the opinion that the threat model is flawed and biased. You call it a "deep dive", but it barely scratches the surface of the issue.
> At the core, phone operating systems are built with better sandboxing than their desktop counterparts. This is why security experts like Matt Green recommend phones for your most sensitive data.
Having better sandboxing is not the same as having safe sandboxing. How secure is the application once an attacker manages to escape the sandbox?
IMHO the rest of the threat model "deep dive" has no value once we take that attack scenario into account.
What about a non-dictionary, 20-character passphrase protecting your private key, or storing your SSH key on an OpenPGP smartcard in a USB token, a Yubikey or a Nitrokey (www.nitrokey.com)?
I believe either of those would be much more secure than an application whose security model rests on sandboxing alone.
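For what it's worth, even the passphrase-only option can be hardened a bit; something like this (the round count is picked arbitrarily) makes offline guessing against a stolen key file considerably slower:

$ ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519   # -a sets the KDF rounds used when encrypting the key with your passphrase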
My phone has been on so many more networks, does not get properly updated, and is just a personal device that I use for lots of stuff and trying out lots of apps. My laptop consists of 98% software from repositories, has disk encryption, is updated very frequently, and has perhaps 2 or 3 applications from outside of the repos.
A phone is also much more prone to theft, and I don't password-protect mine, because if I had to input a 10+ character random password every time... I might as well skip the smartphone and just pull out my laptop every time. Since I carry it around all the time, there is little opportunity for unauthorized access... until it gets stolen. And then I don't want to lose all access to my infrastructure.
Unrooted Android and iOS are much more secure than Windows or common UNIX variants. Mac OS X has similar sandboxing, but few people use only sandboxed apps. I guess the same could be true for Windows with Store apps.
I don't think OS X's sandboxing has seen nearly as much scrutiny as its iOS/Android counterparts. An OS X sandbox escape buys you barely anything extra, since the vast majority of apps don't come from the Mac App Store and don't bother enabling the sandbox in the first place. I wouldn't put a whole lot of faith in it.
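If you're curious, you can check which of your own apps opted in; something along these lines (the app path is just a placeholder) should show whether the sandbox entitlement is present:

$ codesign -d --entitlements :- /Applications/SomeApp.app | grep -A1 app-sandbox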
Vast majority? Well, for the Hacker News audience, sure :) But the App Store is popular enough for Apple to require a checkbox in the settings for installing outside apps, like on Android.
You're confusing the App Store and Gatekeeper. If you register for Apple's developer program they'll issue you a certificate that you can use to sign and distribute your applications outside the App Store, with no input or restrictions from Apple, while bypassing that checkbox. These applications are the majority that aren't required to be sandboxed, and rarely are.
IIRC (haven't used modern Macs in a while) Gatekeeper has three modes — allow App Store only, allow App Store + signed outside, allow all (unsigned) apps. Didn't they switch to the first one by default??
The most recent OSX version (Sierra) made the change of hiding the option that totally disabled Gatekeeper, but the default hasn't changed. I have a Sierra VM I set up about a month ago, and I just checked the setting and it's at "App Store and identified developers".
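If you'd rather check from a terminal, spctl at least tells you whether Gatekeeper assessment is enabled at all (it doesn't distinguish the two remaining modes):

$ spctl --status   # prints "assessments enabled" when Gatekeeper is on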