There have been several calls in recent weeks for a nice UX wrapping GPG. I'm thinking of what Cryptocat aims to be, but with a sound implementation resting on GPG. The crypto community seems supportive of this idea.
I'm not saying I'd be the one to implement this, but at the very least, I'd like to start collecting ideas. Maybe I or someone else could realize them eventually. So let's talk. Please post your thoughts on what would make for a good, user-friendly, and secure wrapper around GPG. Thoughts from security specialists would be especially appreciated.
I'll get the ball rolling with a few basic requirements:
* No roll-your-own crypto. Absolutely none. All algorithms must be provided by a mature, universally trusted library. (And those algorithms must of course be GPG, since that's the whole point of the project.)
* Don't use any libraries that, while sound, expose a low-level API such that we could unwittingly call the API in unsound ways. An example of this would be OpenSSL. (Just an example; obviously OpenSSL != GPG.) See this for a discussion of the library misuse problem: https://news.ycombinator.com/item?id=4779015
* Users should have to understand as little as possible about the inner workings of PGP/GPG. However, in any instance where hiding details would compromise security, details must not be hidden. For example, people need to understand the implications of signing someone's key. We don't hide that part from them. But they shouldn't have to fiddle with text files and command lines. We do hide that part.
* A "good user experience" is more than just a GUI. We already have GPG GUIs. User experience doesn't start when the user first boots the program. It starts at the moment a person first hears about GPG and wants to learn more. Thus, good UX is as much about documentation (including the product homepage) as it is about software.
My prediction is that the kernel of the idea that will make GPG usable is to dispense with the idea of a single keypair, and instead build features that generate ephemeral keypairs on the fly. Make the system workable for users even if they don't understand what a keypair is. Some of what makes OTR effective can be implemented using PGP as the underlying cryptosystem.
When one suggests replacing OTR, one tends to get an earful about the importance of forward secrecy. I think forward secrecy is very important for systems in which there are extremely high-value keys that are "stationary targets". I think forward secrecy is less valuable in desktop applications, where the attacks that would cough up a persistent key would tend to be devastating to the whole cryptosystem anyways.
It's also worth saying that PGP isn't a particularly great cryptosystem. "Modern" PGP predates a lot of important stuff in crypto. But it's a very well studied cryptosystem.
There are strong cryptographers who are working on much, much better systems than PGP. The problem is that those systems will compete with amateur systems and the winner won't be chosen by security. At least with PGP, we know what we're getting.
I would if I could, but another distinction between real cryptographers and amateur ones is a desire not to publicize things until the design is trustworthy. I think you'll have to take my word for this (but I'll try to think of one I can share).
I understand this discussion is about avoiding snake oil: only using good-quality, trusted, respected systems, using them carefully, and making them easier to use.
Some examples from PGP include Bob signing Ann's key without sufficient verification, or people publishing their private and public keys by accident.
Remembering that many people are just hopeless at security ('123456' used as passwords; people clicking through browser certificate warnings; people installing malware and ignoring OS warnings about untrusted sources) it seems a reasonable point to make: "Secure products can be made easier to use, and if they are both good and easy to use it will enhance security".
I know tptacek was talking about systems still under development, but I immediately thought of DJB's NaCl (http://nacl.cr.yp.to/) when I read that statement.
> It's also worth saying that PGP isn't a particularly great cryptosystem. "Modern" PGP predates a lot of important stuff in crypto. But it's a very well studied cryptosystem.
Is there anything available today that you'd recommend over PGP, regardless of usability or ubiquity? e.g., if one has to include crypto inside an internal-use only email product, that requires both encryption and/or signatures - what's an alternative to PGP that would be considered reliable?
You're right--that would be a major boost to usability. One question though: Does this undermine the security of PGP in terms of identity verification? I mean, if I'm receiving an ephemeral public key over the wire, how do I know it's not being generated by a man in the middle? With semi-permanent, published keys, I can put my trust in the signatures. But I'd imagine that the scheme you're proposing doesn't have signed keys. Or am I mistaken about that?
> It's also worth saying that PGP isn't a particularly great cryptosystem.
Do you feel the best move is to push forward with PGP, use something else now, or wait for newer systems to be better-studied?
My other question about ephemeral keys is whether they're useful for something like email. I understand how they work for transient conversations like HTTPS or chats (although if you archive the chats forever you'd have the same problem). Would you store something like a key version as KeyCzar does and keep multiple keys around, or periodically have to recrypt all archived data as with key rotation? Or have a single key(pair) that is used for archiving data which is different to the one used in transmission?
I'm glad I found your comment - I just pushed something like what you described to one of my repos yesterday. Hear me out; I'm not promoting myself out of context here.
I'm currently working on an OpenPGP integration for the Roundcube webmail project and have so far added functionality from the OpenPGP.js library. The pros of this are of course usability and that no external applications are necessary; the cons are, among others, what you just wrote above.
To be able to support bridging local GPG binaries and keyrings into graphical browsers without exposing any critical information, I threw together an HTTPD which listens on the client's localhost. The concept is already proven to work; now it's a mere matter of implementation. It's based on the PyGPG library, which wraps GPG for Python and is compatible with both Windows and *NIX systems as long as they can execute GPG and Python (which they can).
It's still a work in progress but currently supports key generation and key listing in response to HTTP requests. Through cross-origin resource sharing users can specify which domains should be allowed to speak to it in a simple text file separated by line breaks.
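The whitelist check could look something like the following minimal sketch. This is an illustration of the idea, not pygpghttpd's actual code; the filename and function names are invented here, and the whitelist format is assumed to be one hostname per line as described above.

```python
# Toy version of the CORS whitelist described above: a line-delimited text
# file of hostnames that are allowed to talk to the localhost daemon.
from urllib.parse import urlparse

def load_allowed_domains(text: str) -> set:
    """Parse the line-delimited whitelist into a set of hostnames."""
    return {line.strip() for line in text.splitlines() if line.strip()}

def origin_allowed(origin_header: str, allowed: set) -> bool:
    """Return True only if the request's Origin header names a whitelisted host."""
    host = urlparse(origin_header).hostname
    return host is not None and host in allowed

# Example whitelist with two permitted webmail origins (hypothetical names)
allowed = load_allowed_domains("mail.example.com\nwebmail.example.org\n")
print(origin_allowed("https://mail.example.com", allowed))  # True
print(origin_allowed("https://evil.example.net", allowed))  # False
```

The daemon would answer the CORS preflight only for whitelisted origins, so a random page can't silently drive the user's local GPG.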
I can conclude that what you are requesting is actively being built and partially already exists, but still needs to be put to use. Hope you don't view this as shameless advertising, because it's not - I'm only responding because your ideas are spot-on with what I pushed yesterday.
My plugin does nothing crypto-related on the server side; everything happens locally in the browser through JavaScript, and in the future there will be an additional driver for performing crypto ops in GPG binaries, as described in my comment above.
I don't mean to be harsh, but server-side crypto is far from a good idea. It gives violent regimes, such as America, technical grounds to force hosts into backdooring their server-side crypto. Anything like this must be done on the client for safety and privacy to ultimately be achieved.
Given that practically all (to at least 4 significant digits) of my mail arrives at the mail server un-encrypted, I think there's still some value in encrypting it before the server stores it. I'm setting up some Perl scripts to help exim encrypt any non-encrypted mail to my public key before it delivers it into the local mailbox. That's still subvertable by anybody with enough power to lean on the hosting company, but then all that mail was interceptable in transit anyway - at least I've made sure that stored archives on the hosted server aren't in cleartext.
I'm also considering setting up some scripts for outbound mail - to automatically encrypt any (non-encrypted) mail I send if I've ever received encrypted mail from the recipient. Have the mail server keep a record of email addresses and public keys, and auto encrypt where possible.
(And in regards to in-browser crypto - I'm unsure there are strong enough guarantees of security in javascript to make me entirely comfortable having my private keys and passphrases hanging around in the process space where rogue javascript and/or plugins might be able to scoop them up…)
That's a valid point, but you can't effectively encrypt incoming email at the webmail-client layer. You'd be better off incorporating that before the message is even saved to disk - in the mail delivery stage of your MTA. So once again it wouldn't be PHP (I hope for your sake!).
"(And in regards to in-browser crypto - I'm unsure there are strong enough guarantees of security in javascript to make me entirely comfortable having my private keys and passphrases hanging around in the process space where rogue javascript and/or plugins might be able to scoop them up…)"
Your sarcasm is entirely valid, but you didn't actually look at the project that you are criticizing. The entire point of what I linked in my comment is that nothing critical should be exposed to the JavaScript - just an API that it can send commands to: keygen, verify this message, send cleartext and receive ciphertext in response, etc. You're preaching to a believer here. :-)
Your ~/.gnupg/ is -not- accessible to JavaScript. The interface - the GnuPG binary - is. That's the point. Now, we can both agree that exposing private keys in such an API is a bad idea.
Oh, and I hadn't intended "sarcasm", apologies if it came off that way - I'd expected to be more likely accused of paranoia... I've actually got PGP and encfs installed and testing on my iPhone & iPad, but I've not managed to convince myself it's "safe" to put real (as opposed to "just created to see if this works") private keys for either of them onto an iOS device, with all the lack of transparency about who's actually "in control" of those devices.
While I'm reasonably sure GPG/encfs on my phone will reduce my exposure to "dragnet style, intercept and archive everything" surveillance, if the NSA are after _me_, I've no doubt that there are people at the NSA who've already worked out how to coerce Apple into pushing a software update to my phone that sniffs around with root access looking for things that look like private keys, and keylogs things that look like passphrases - and ships them all off to Utah.
(And, truth be told, I strongly suspect all my Windows and Mac OS X boxes would fall in exactly the same fashion, and it wouldn't surprise me too much to find the firmware in my bios or USB bridge or ethernet adaptor or hard drive on my linux boxes is equally traitorous and ready to "sell me out"…)
On the incoming email encryption - yeah, that's in the MTA, not the webmail software. Having said that, I'm basing what I'm doing off this: https://grepular.com/Automatically_Encrypting_all_Incoming_E... at least partly because Perl is my go-to hack-shit-together language. I could _easily_ imagine a lot of my cow-orkers choosing to do that in PHP.
And yeah, I hadn't followed your links and made poor assumptions about your project. I just briefly skimmed through some of them and I've got a question - have you got a way to protect the passphrase from ending up somewhere the browser can see it? (Or, if the decryption is "passphraseless" from the browser's point of view, how do you ensure rogue javascript can't pass encrypted data in and retrieve cleartext?)
The questions that you raise are of course exactly what I'm interested in discussing. I can't think of any way that PGP/GPG protects you against keyloggers or a pre-infected computer. I agree that they are relevant threats, but my question is whether it's really up to developers to prevent rogue JavaScript in third-party software on the user's localhost. The same threats apply to all existing cryptosystems - as with one-time pads, where someone could look over your shoulder - but that by itself is not considered to break the underlying strength of the design. Or another example: how does Enigmail for Thunderbird protect you against having code injected and keys stolen? I don't think it does, but Enigmail isn't considered insecure. I think the questions are fair to raise, but I see them raised far more often when people confront new ideas than when they look at established practice, which I honestly consider a bit of an unfair judgement.
One of the factors which can narrow the scope of attackers is to use products like the Crypto Stick, but then again, what is preventing a computer from being rootkitted and having its keys stolen as soon as they are exposed in the system?
Developers can of course only address weaknesses in what they have control over. We can't stop your computer from being infected by rootkits or by rogue JavaScript from plugins that you have voluntarily installed. My advice would be to be careful and audit everything that may be a threat, in order to at least try to minimize the risks. Unfortunately I don't think many users do that, but it's not something we as developers can address and prevent.
The dilemma here is the same as with filesharing: if it's accessible it can be copied and transferred. There's no patch against that.
You described in your post an HTTPd process on the local user's machine that will make shell calls to their installed GPG binary. My PHP port merely presents one alternative way of doing that on any system that supports PHP - without the need for the user to install and configure GPG binaries, and without the need for the HTTPd process to have permission to make shell commands.
If you see encryption tools that others have written - and all you can imagine is implementing them in insecure ways, then that's your own issue.
Check the links posted in the comment you replied to; it's not cryptography in JavaScript. It's JavaScript posting to an httpd on the user's localhost which bridges GnuPG - the cryptography is done in GnuPG and passed through the httpd which the JS talks to.
But yes there is JS crypto in the project, as a planned separate optional driver.
My biggest hesitation here is that you're still trusting the server. Which, not coincidentally, has always been one of the biggest objections to JS crypto. That is, if the server is compromised, it can serve malicious JS, and it can just as easily steal any data that's being encrypted server-side.
To me, one of the most important things about PGP is that the plaintext and the encryption process are entirely in your control. (At least to the extent that you control your own computer.) You lose that assurance if you do server-side encryption.
2) you want to encrypt sensitive information. You send it to (localhost) B
3) you receive the encrypted data
4) you use it through server A
Aren't you sending the sensitive information through javascript served by server A? Didn't you just lose the security that you wanted by encrypting on localhost?
"Didn't you just lose the security that you wanted by encrypting on localhost?"
No, the sensitive information isn't being protected from localhost but from server A and anything else on the path between the user and the message's destination. localhost is the user. For clarification: GPG is on the user's localhost, not the server.
1. Alice uses a web app served by server A
2. Alice wishes to send an encrypted message through the web app served by server A to Bob
3. Alice writes the message in her browser, on the client side
4. Alice finishes and clicks "Send"
5. The web app's client-side code (JavaScript) sends the message to Alice's pygpghttpd listening on localhost
6. pygpghttpd responds with the ciphertext to Alice's web browser
7. Alice's web browser replaces the cleartext content with the encrypted content
8. The encrypted content is sent to server A to be routed to Bob
---------------
1. Bob receives the encrypted message from Alice in the web app served by server A
2. The web app's client-side JavaScript sends the encrypted message to Bob's pygpghttpd listening on Bob's localhost
The issue with security is that you're not competing with other secure solutions, but with the convenience of going unsecured. I've read that even journalists who face real risk by communicating without PGP don't bother, because it's difficult to participate in something that requires others to also participate. There's also the issue that email is only one (slow) form of digital communication.
Cryptocat (ignoring security for a moment) is an attempt to solve the accessibility problem in a way that actually works. Widespread adoption works well with some centralized aspects - and if we're talking about dragnet avoidance for most people, then this is probably a reasonable compromise.
If Google generated public/private key pairs for each Gmail user, tied the keys to each account, and then used the public keys to encrypt all email (taking an approach similar to using real OTR for Google Hangouts), then all Google-to-Google communication would have a layer of protection from unwanted ISP monitoring. Granted, you're still trusting Google with the private key, and warrants or some other request could still reveal it, but you'd actually have wide-scale use of the thing. Facebook could do something similar for their chat, and then you'd have most of how people actually communicate covered, across multiple devices, in a way that protects against sweeping surveillance.
I'm pretty ignorant about most of this - am I missing something obvious that would prevent this from working? Obviously you're still trusting the companies, but we were doing that anyway.
Cryptocat is an attempt to solve the accessibility problem that essentially handwaves away many key problems in cryptographic security. It's a system that is simple for users to use because it's simplistic, and is thus less secure even than some other very-simple systems.
For instance, OTR (as implemented by IM clients) is an example of an extraordinarily simple cryptosystem (I'd argue too simple) that at least provides for a notion of persistent keys.
That makes sense and I think my comment was a little ambiguous. It wasn't so much an endorsement of cryptocat itself, but the idea that using a centralized system that supports encryption may be a reasonable compromise to protect against indiscriminate collection by third parties, and something I think existing companies that handle most communication already could generally implement.
> The issue with security is that you're not competing with other secure solutions, but with the convenience of unsecured.
True, but that doesn't mean the convenient and insecure apps have to totally dominate the market. I'd be willing to bet there are plenty of people who want true, reliable security, and are willing to take on a little hassle for it.
> I've read that even journalists that face real risk communicating without pgp don't bother because it's difficult to participate in something that requires others to also participate.
The same could be said of the early days of email: It's only useful if other people are also using it. But we overcame that chicken and egg problem. The same could happen for PGP. It's a cost-benefit equation: People need to be sufficiently worried about privacy, and the friction of PGP needs to be sufficiently low. There must be some tipping point.
> Cryptocat...is probably a reasonable compromise.
Not if it can be cracked with the processing power of a mere desktop computer. I've heard claims to this effect. What are the use cases for a sort-of-secure app like Cryptocat? Certainly not hiding from sophisticated adversaries. So it's just about protecting your data from casual users then. But in that case, I'd argue that even your basic instant messaging client is adequately secure, in that a casual user doesn't know how to play man-in-the-middle.
> If Google generated public/private key pairs for each gmail user
Either you're encrypting server-side or client-side, in that scenario. If server-side, you're not much better off than just using HTTPS. Google still has your plaintext. So if you trust Google, just use HTTPS. And if you don't trust Google, then encrypting server side is pointless. Client-side encryption isn't currently viable: http://www.matasano.com/articles/javascript-cryptography/
I agree with your first two points and if the complexity of using pgp were reduced to that of simply using email then I think it'd become more widely used.
I didn't mean to imply that cryptocat itself is a reasonable compromise (I don't know enough), but that a centralized system that implements encryption for its users might be which leads into your fourth point.
Yes, javascript client-side encryption in its current form is fundamentally flawed against targeted attacks, but I'm thinking of untargeted attacks and immediate, widespread adoption.
If you're collecting everything from everyone in perpetuity, the risk for abuse I see is that when someone becomes interesting to the government, it can query the data set and use it against them.
I'd think there would be a way for these companies to use client-side encryption to protect a user's data until that user becomes 'interesting', at which point they'd have to use more fundamentally secure methods (which is how it currently is anyway).
Seems like the easiest way to get the most people generally protected from abuse.
Is using https enough? If so, it seems like most communication would be protected anyway. I got the impression that this wasn't true - not because I don't trust google, but because of something else (it seemed google genuinely didn't give access to everything, yet people made comments about no digital communication being secure: http://www.youtube.com/watch?v=vt9kRLrmrjc).
So yes, it's true that JS crypto provides a higher degree of security than no crypto. And security is always about degrees, not absolutes. But you also have to consider that a user may develop a false sense of security if they're told their data is "encrypted." This does put the user at risk. So if you're going to use JS crypto--which is almost certainly unsound--you take the risk of misleading users in a potentially dangerous way.
To give a concrete example, let's say Google adds JS-based PGP support to Gmail. Suppose that, in general, it works, inasmuch as Gmail delivers properly encoded PGP messages to your recipients and can read PGP messages that are sent to you. But suppose further that Google is somehow compromised - maybe through technical means, maybe through social engineering, maybe through legal pressure. Then a malicious JS payload is delivered to users, hidden somewhere deep in the page. This payload allows PGP messages to continue being sent and received, but it also backdoors you - maybe by creating an alternate version of every message, encrypted with the attacker's key.
Unfortunately, current clients are not at all equipped to detect if this is happening. For the browser to be able to participate in a truly secure crypto system, it would need to have the most critical parts built in, not provided by websites as JS.
> Is using https enough?
It's generally believed to be adequate for protecting against a man in the middle. It doesn't help you if your computer or the server is compromised. Whether you trust Google or not is your choice. The way I see it, every entity that stores data will eventually have abuse, a leak, or a breach. So if you're at peace with that risk, then HTTPS is enough.
However, it's still a minor pain to use for web-based email. You have to remember to select the entire body, right-click, Services, then select Encrypt. Not sure what can be done other than making it a browser extension, but the security track record on those isn't exactly stellar.
I didn't find it to be a good user experience. The GUI is OK, but when I talk about UX, I'm talking about more than that. I'm talking about what it feels like to visit the product's site for the first time with no clue what it is.
Go to the GPG Tools homepage. It's kind of a mess of links, without a dead-obvious path for the absolute beginner. Should I click "Quickstart tutorial" or "Introduction"? Or should I just download the installer, which is my first step for 90% of applications? And the experience doesn't get less confusing once you get past the home page. If anything, it gets more so.
GPG Tools strikes me as a project that is by hackers, for hackers. Nothing wrong with that. But it's very different from what I envision. I want a UX that holds my hand. It should be like a teacher, patiently guiding me through everything I need to know to use GPG.
Fortunately, the mental model for how GPG works isn't actually that complicated. I think most people can understand, for example, what key signing is, if it's explained well.
Apologies to the maintainers of the GPG Tools. Their work is admirable and greatly exceeds the whole lot of nothing I've contributed. I'm hoping this will be interpreted as constructive criticism.
> However, it's still a minor pain to use for web-based email.
I don't see this problem being solved without something implemented in native code.
There have been proposals to add a crypto API to browsers, where such API would be implemented in native code. I.e. you could call the API from JS, but the algorithms would all run in native code. I don't know if any of these proposals will go anywhere.
Conceivably, one could also just up and write C modules for popular browsers. But then you'd have to get those accepted by the browser makers.
Either of these solutions is beyond the scope of what I envision, at least for now.
The W3C working group will eventually produce a crypto API standard, though whether that standard will meet the requirements you describe remains to be seen. In particular, it exposes primitives (the proposed API can definitely be called in unsound ways), which a whole lot of people think is a terrible idea but which the standard editor seems bound and determined to ship. It's very frustrating.
That's because W3C's goal in having a cryptography standard isn't security, but rather interoperability; they see encryption as another step towards making the web a first-class application development environment. Without it, they can't get Netflix to run on pure "open" web technology.
It's unfortunate, because we could use a secure browser crypto interface much more than we could use better browser interoperability with random non-web technology. But our industry is, of course, fundamentally unserious about security.
I don't understand why webmail seems to be such a conundrum for everyone. Can't we just write plugins that look for encrypted messages and signatures in a page or ajax request, and if you have the key in your keychain, you get prompted for it? It seems like this would be an even easier plugin than the Mail.app one.
That being said, I think the real solution is OS-level integration. Perhaps a Facebook app to help grandma with the web of trust.
There's some stuff about trust of keys. When Alice uses Bob's public key she has to know that it actually is Bob's key.
Looking at other password/certificate mistakes (people use 123456 as a password for important accounts; people click through warnings), I think this might be harder than it seems.
See for example some of the hoax accounts on Twitter. People do get confused, even though it should be easy enough to spot the real account over the hoax account.
Online reputation and trust is important. It's a shame that Klout (totally irrelevant to this) is what most people think of when I say 'online reputation'.
The issue is that plugins must generally be written in JavaScript. It's generally felt that, at least with the current state of technology, in-browser JavaScript interpreters haven't been sufficiently vetted to trust with crypto. For more details on this point: http://www.matasano.com/articles/javascript-cryptography/
> Are we really so javascript/web programming oriented now that "plug-in" connotes javascript?
Yes, afraid so.
> I am talking about an actual application-level plugin. Think Flash and Silverlight, not Greasemonkey.
This is absolutely an option. It's a tall order, because there are a lot of browser/OS combinations out there. But I believe it could be very successful.
If we someday have an ecosystem of C browser extensions, in-browser crypto may be much more promising. Or not. I suppose there could be other problems besides the language.
Yes, very good point. I'd probably be in favor of having two ways of doing it:
* Some kind of "sync" for people who hate dealing with files.
* Copying a single file by whatever means you see fit, such as a USB stick. Many people, such as myself, prefer the simplicity and transparency of a good old file.
Each of these has potential security pitfalls. Those would have to be thought out.
I'd call out something I mentioned in my other comment in this thread: Enigmail + Thunderbird makes it pretty easy to get PGP up and running. Make an elegant doc on configuring that, put up a refreshing landing page, and you're golden.
It still requires users to understand that they have a public and a private key (and just one of each), and a "key ring" to which they add their counterparties' public keys, and that those keys themselves have to be authenticated and "signed" if the system is to be secure.
For me at least, this isn't the pain point. Granted, I'm an engineer, but all I've got for data is my own mind, so here's my anecdotal evidence.
The concept of keypairs doesn't seem hard to me. And I think that can be abstracted away a bit anyway. You just need to know that there's this super-secret file (the private key) that you should never leak to anyone, and you need to sync it to your devices. So far, not so bad. As for the public key, I think the software can mostly just handle that for you. I.e. it can take care of uploading it to keyservers.
Signatures might be harder for people to understand. But here still, a good UI could help to abstract that away a bit. Imagine I can just click "get my key signed," enter an email address, and that's it on my end. No more steps for me to take. On my friend's side, it's just an email that comes in, probably with a link using the application's special protocol. My friend clicks the link, her PGP UI boots, and a yes/no pops up. Done.
So I don't think that understanding the mental model is the bottleneck right now. Rather, I think it's that the software and the accompanying documentation are not optimized for getting a naive user off the ground as fast as possible (without compromising security).
I've been using Enigmail for about a decade, and PGP longer than that, and I still think it's a pain in the ass. Whenever I'm configuring a new mail client, I have to fiddle with multiple settings just to get it to send encrypted mail.
There's a real opportunity to build something much, much simpler on top of PGP. All you really have to do is pick some sensible defaults and automate a few steps. Look at how many nerds can't be bothered with encrypted communication, let alone normal people.
Making GPG and the programs that use it easier to use would be nice. But the primary issue is that many people who should be able to comprehend PKI basics and the necessary background material are too "busy" or "lazy" to spend even the limited time it takes to scratch the surface.
I face this daily, since I'm the go-to guy at my office for scripting/coding solutions for these folks. They just want it to work, without having to learn anything or understand the decisions involved. And these are the same people who will spend hours figuring out complex lunch accounting issues, read volumes on video game strategies, or rebuild engines.
You can do all that stuff, but it's the fact that you have to understand these concepts to use PGP that makes it difficult, not the way they're documented.
If you don't understand key management and public/private keys, signing, etc., you either have to depend on someone that does, or cargo-cult your way through using it. And it's not difficult given a little "want to". Alton Brown could teach the concepts in a single show.
I understand this stuff, and I believe I can teach it to my friends. But I don't believe I can convince my friends to put up with the software that is currently available. That is the entire reason I can't find anyone to use PGP with.
S/MIME suffers from the same problem as SSL/TLS: everyone puts their trust in CAs, and CAs regularly get hacked, tricked, controlled by governments, etc. It does not matter that you created your own private key if someone else can create their own private key too and have it signed by a bad (but trusted-by-everyone) CA.
How many regular users do you know who actually edit the list of trusted CAs in their browsers? (I sure don't, though I probably should.) Who would immediately remove DigiNotar because they heard on the news it got hacked? No: Big Well-Designed Site is signed by Big Company, so the user trusts it.
On the other hand, if I give you a key that's signed by someone you trust, you can make an informed decision on whether to trust my key. It is a decision on a level where the regular user might feel they have something to say (whereas a regular user is not likely to feel they know more about security than Big Company).
Perhaps most users would have very few keys that they trust/verify. But I'd say that's a good thing, because if you haven't gotten real verification, it's just a false sense of security.
Do you mean we should build a good UX around S/MIME? It seems clear to me that none currently exists.
I'd be curious to hear from security specialists about S/MIME. How thoroughly studied is it? How are the libraries? I have hardly ever heard it discussed, so I'm a little hesitant at the moment.
I've thought some more about S/MIME, and right now my biggest concern is the CAs. I don't like having that central point of trust/failure.
Do you know if S/MIME can work on a distributed model?
Also, what are the advantages of S/MIME over PGP? I hear what you're saying about enterprise adoption, but I'm more concerned with the thoroughness of peer review than usage rates.
Not necessarily true. There are certainly CAs that are able to keep their private keys out of the hands of most governments. But there is definite uncertainty about who to trust. For the truly cautious, wouldn't it make sense to explore setting up your own CA? Something like OpenCA or TinyCA should do the trick.
You're missing the point. If the UI says "yes, that's 23david" whenever I can get any CA to certify that, then the security of the system is no better than that of the weakest CA. Sure, your CA may be perfect, but why would the attacker go for the strongest point?
So perhaps that's a UI issue: it should clearly show which CA is verifying the identity, and alert you clearly if an encrypted email uses a different CA than prior ones.
Depending on the client you're using, it shouldn't be too hard to prune the trusted CA list to include only providers you choose to trust. If you want, include only your own CA and remove all the others.
It probably makes sense to start deciding which CAs we should or shouldn't trust. Has anyone reliable done any work on rating or evaluating the trustworthiness/security of different CAs?
I hear this, but I have absolutely zero idea how to get S/MIME working, and I'm faintly aware it might involve having to buy a digital certificate from someone (and doesn't that mean that someone could decrypt my email anyway, if they're the ones who generate the certificate?)
What I'm trying to say here is: I'm a bit of a geek, and if I don't understand how it works, there's no way e.g. my parents will. If S/MIME is the solution, there's a serious education battle that needs to be fought.
I say this as someone who uses GPG (via GPG Tools on OS X) without any bother.
No, you generate your own, then have the public key signed by a CA having proven your identity to some greater or lesser degree, depending on the level of certification - but generating your own and having it signed is not a straightforward process in my (limited) experience.
The problem with that method is the recipient of your mail is still relying on that CA to validate your public key. The CA could (willingly or under duress) sign some other public key and claim it's yours, then use that key to impersonate you, and even trick recipients into using that public key to encrypt emails intended for you. That would form the basis of a man-in-the-middle attack.
It's unlikely to work if you've already been communicating using the real public key (depending on how the software handles new keys), but for new recipients in particular, it's possible.
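One concrete way for software to "handle new keys" defensively is trust-on-first-use pinning: remember the first fingerprint seen per correspondent and flag any change, CA signature or not. A minimal sketch with an in-memory store (a real client would persist this and use a richer API):

```python
# Trust-on-first-use (TOFU): pin the first key fingerprint seen for each
# sender, then warn loudly whenever a later message arrives under a
# different fingerprint -- even one carrying a valid CA signature.
pinned: dict[str, str] = {}

def check_sender_key(email: str, fingerprint: str) -> str:
    if email not in pinned:
        pinned[email] = fingerprint
        return "new-sender"            # first contact: pin and accept
    if pinned[email] == fingerprint:
        return "ok"                    # matches the pinned key
    return "KEY-CHANGED"               # possible MITM: demand user attention

print(check_sender_key("bob@example.com", "AAAA"))  # new-sender
print(check_sender_key("bob@example.com", "AAAA"))  # ok
print(check_sender_key("bob@example.com", "BBBB"))  # KEY-CHANGED
```

This is exactly the check that defeats the "Signed by Big Trusted CA, but the key quietly changed" scenario for existing correspondents; first contact remains the vulnerable moment.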
That's possible with PGP, too, if you don't verify the key in person. E.g. "hey Alice, I lost my passphrase, please use the attached key or ID xxxx on one of the keyservers".
It's even easier to do, because you don't have to trick a CA into creating a duplicate key.
> It's even easier to do, because you don't have to trick a CA into creating a duplicate key.
In some ways it's easier with PGP, yes.
But in some ways relying on a possibly-hostile CA is worse: if the software doesn't really give the user any visibility of key changes, then the impersonator won't even need to social-engineer the recipient with "whoops I lost my key". Instead, the duped recipient will just see "Signed by Big Trusted CA" with a shiny green padlock, and will think everything is fine, even though the key under the hood has changed.
Yes, that's a step in the right direction. (In terms of UX. I won't make any claims either way about its security.) In any case, I'd definitely want a solution that works on the desktop. I don't know if iPGP integrates with keyservers, but that would also be an important feature.
The barriers to modern cryptography seem to be far more social and psychological than technical.
It seems as though many of the web-of-trust issues that impeded PGP 15+ years ago could be helped by current day social networking practices, if a social network pushed it. PGP/GPG could be used under the hood, as long as the user never has to deal with an actual file anywhere unless they wanted to.
The consequences of evil twin attacks [1] may be worse, but if the 'verify' action was not as casual as mere friending, then perhaps it would be less susceptible.
I've been thinking about the possibility of doing it under the hood via a browser extension on major social networks. Something akin to 1) you publish a photograph of yourself to Facebook that contains your PGP key in EXIF. 2) Your friends, who can see that photograph, encrypt messages for all friends with "public key" photographs. Finally, 3) The browser extension seamlessly decodes all PGP messages through page manipulation (e.g. walking all text nodes and looking for a specific sentinel, and then decrypting all messages that match the sentinel). This way, you would be able to communicate securely over a social network with nothing but a browser extension.
I have a very rudimentary prototype up on Github if anyone is interested. It has some throw away keys and allows you to encrypt for those via right-clicking text in a textarea. The code uses OpenPGP.js.
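The sentinel-matching in step 3 is the mechanically simple part. Sketched in Python here rather than the extension's JavaScript, using the standard OpenPGP ASCII-armor delimiters as the sentinel (a real extension would run the same match over each DOM text node before handing matches to OpenPGP.js):

```python
import re

# Standard OpenPGP ASCII-armor delimiters act as the sentinel.
ARMORED = re.compile(
    r"-----BEGIN PGP MESSAGE-----.*?-----END PGP MESSAGE-----",
    re.DOTALL,  # armored messages span multiple lines
)

def find_pgp_messages(page_text: str) -> list[str]:
    """Return every armored PGP message embedded in a blob of page text."""
    return ARMORED.findall(page_text)

wall_post = (
    "Check this out!\n"
    "-----BEGIN PGP MESSAGE-----\nhQEMA...\n-----END PGP MESSAGE-----\n"
    "See you later."
)
print(len(find_pgp_messages(wall_post)))  # 1
```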
Great idea. It has the potential to spread virally if those who don't have the extension installed are shown a message telling them the benefits of installing it.
A very small (but rather important, depending on occasion/readership) detail:
[...]
gpg --encrypt --sign --armor -r recipient@email -r your@email.com filename
[...]
-r recipient Specifies recipients of the message. You must already have private keys of the people listed. [...]
The author probably meant, "You must already have public keys of the people listed," not private. (Probably just a typo-level thing, they doubtlessly know what they are talking about.)
Glad somebody else pointed this out. I read that and thought it should have said "public" too, but the rest of the article was so strong that I started doubting myself and felt like I was going crazy.
* Get Thunderbird and Enigmail.
* Use Enigmail to generate your keypair
* Upload your public key to the keyserver (via the GUI)
* Proceed to use email.
BAM! Done. A hell of a lot easier than this tutorial.
The scary bit is that it's not served over HTTPS, which is probably a must-have for sites that publish public-key information. It's much easier to MITM the site and claim to be posting "his" public key and email address while really publishing your own info, etc.
A great tutorial, however. Very accessible in my opinion, and considering its purpose, my previous paragraph is more of an aside.
That's the purpose of key signing. The author--like almost all PGP users--has gotten his key signed by third parties. This means that its integrity can be verified. E.g., if a man in the middle were to intercept the HTTP response and change the contents of the key, it would lack the signatures.
Still, I suppose it's possible for an adversary to work around this as well. If you can find enough people who are 1) willing to falsely sign a key, and 2) trusted by others, you can have these people sign a spoofed key. But then these people would be putting their reputations on the line, and the probability of being exposed is high. Thus the cost of the attack is high.
The lesson being: if you're emailing info that is valuable enough to warrant such a costly attack, verify the key through some other means. Meet the message recipient in person, for example. And consider a thorough security audit of everything in your digital and physical life. You're obviously operating in a far more dangerous world than I am. There are probably many vulnerabilities available to attackers that have nothing to do with your email.
My warning was truly an aside, and given the nature of a large group of visitors, of course a handful might not follow best practices and verify the signatures, etc.
Ah, good point. I see what you mean--if someone is just learning about PGP the first time, they might not know about issues surrounding key integrity, and the need for trusted 3rd-party signatures.
Does it actually? I think it shows why proverbial grandmothers aren't using it, but what this describes is well within the grasp of technical users (among whom adoption is also poor).
Well, I read an Ask HN thread the other day about a guy using PGP, and what you have to do and give up to have secure email is just not worth it. Not for me, at least.
There's some misinformation in there about not starting your messages with obvious niceties ("Dear Fred") or including the same text at the bottom of all your emails (your email signature). If the crypto is good, this doesn't matter: it's not deterministic encryption. If it can't withstand a known-plaintext attack, you shouldn't be using it. GPG can.
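To make the non-determinism point concrete: hybrid schemes like OpenPGP's generate a fresh random session key for every message, so encrypting the same plaintext twice yields unrelated ciphertexts. A toy illustration of just that property (an XOR pad with a random one-time key; not real crypto, and not GPG's actual cipher):

```python
import secrets

def toy_hybrid_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # GPG picks a fresh random session key for every message; this toy
    # XOR pad only illustrates the resulting non-determinism (it is NOT
    # real crypto -- never roll your own, per the thread's first rule).
    session_key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, session_key))
    return session_key, ciphertext

msg = b"Dear Fred, same greeting every time."
key1, c1 = toy_hybrid_encrypt(msg)
key2, c2 = toy_hybrid_encrypt(msg)
print(c1 != c2)  # True (overwhelmingly likely): same plaintext, different ciphertext
print(bytes(c ^ k for c, k in zip(c1, key1)) == msg)  # True: still decrypts
```

So a known prefix like "Dear Fred" gives an attacker nothing to compare across messages.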
I have a serious beef with this. The main problem is that you need a tutorial like this at all. As long as it is this convoluted to set up and operate, end-to-end security and privacy will only be for the few of us.
This should be ubiquitous, and easy to set up for everyone, which it isn't, nor are any of the numerous Outlook plugins either.
The tutorial makes no mention of sub-keys. I thought using sub-keys was a generally accepted good practice? The use-case being that if the sub-key is compromised, you can invalidate it and issue a new one -- and others who trust the root don't need to update much.
Is that still the case? Was it ever? I don't know enough about PGP to know, unfortunately.
Yes, using subkeys and rotating them is good practice. But really the hardest problem with pgp is getting people to use it in the first place, so let's not focus on subkey use, or upgrading from sha1 to sha256 (or better), or key length (the author uses 1024 bits only).
Though I'm not sure why the author focuses on non-threats like known plain text attacks, which gpg isn't vulnerable to, and not these issues.
I have played with GPG time and again, my Enigmail/Thunderbird/OpenPGP card setup is fully functional.
But what's holding me back is webmail.
I don't use the web interface often, but it has proven to be absolutely crucial to be able to get some important mail (boarding pass, mail explaining how to get somewhere etc.) from any computer.
You need a cross-platform USB stick program, with all your secrets on it properly encrypted, for that kind of thing. (The simplest hack I can think of would probably be Python binaries, with a local-webserver based interface.)
I don't know of such an application, or whether the approach is rigorous, I'm afraid. But I think that's the shape of the solution.
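For what it's worth, the "Python binaries with a local-webserver based interface" idea could take roughly this shape, using only the standard library. Everything here is a sketch of a hypothetical tool: it binds to localhost, serves a decrypt form, and omits the actual GPG work entirely:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PortableUI(BaseHTTPRequestHandler):
    """Minimal localhost-only UI a USB-stick GPG wrapper might serve."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        # A real tool would decrypt pasted ciphertext with the keys held
        # on the (encrypted) stick; here we only render the form.
        self.wfile.write(b"<form method='post'>"
                         b"<textarea name='ciphertext'></textarea>"
                         b"<button>Decrypt</button></form>")

    def log_message(self, *args):  # silence per-request console noise
        pass

server = HTTPServer(("127.0.0.1", 0), PortableUI)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

page = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/").read()
print(b"Decrypt" in page)  # True
server.shutdown()
```

The browser then becomes the UI on any machine with a Python runtime, though it inherits the same problem raised above: you're still trusting the host computer.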
First, I'm not remotely interested in running my own mail infrastructure anymore. Been there, done that. Today it's much too hard to get mail accepted by others.
But more important: iPads don't have a USB connector, and neither does my mobile phone. Friends have Macs, and in other places there might be other crippled devices.
The web is a universal building block. USB sticks are not.
Sorry, I wasn't clear. I use LastPass to transfer my private key to the PGP app on my iPad (and elsewhere), as opposed to going through Dropbox or whatever.
That relies upon trusting LastPass and trusting the iPad, of course, both of which are questionable.
Why rely on the (possibly compromised) OS of the host computer's hard drive when you can boot your own OS straight from the USB itself? What you are looking for is tails (https://tails.boum.org/) with a luks encrypted persistent partition.