Hacker News
How to support PGP encryption in Gmail (conorpp.com)
113 points by conorpp on Aug 26, 2015 | hide | past | favorite | 49 comments



It's worth mentioning Mailvelope ( https://www.mailvelope.com/ )

- Free - Supported in FF / Chrome - Supports Gmail, Yahoo, Outlook and GMX

I've been using it extensively for about 12 months now. It's solid, unobtrusive, and just works.

Decryption of attachments would be nice, but it's definitely not a deal breaker.


FWIW Mailvelope also works fine with Fastmail in my experience.


Signing of encrypted messages is not supported at the moment. Unfortunately that's a deal breaker for me :-( But it does look very nice.


Plain vanilla signing obviously works, and you can (according to a quick check) just paste your encrypted message back into the window and sign it (unless I grossly misunderstand signing, which I might).


At the moment, end-to-end is NOT production ready, and will likely undergo further hardening in the coming months. Use at your own risk.


I agree that it's not yet ready for general use, but what hardening do you expect to happen in the coming months?

I'm one of the original end-to-end authors, but haven't worked on it recently.


I'm a developer on the E2E team as well and can confirm that there's no 'hardening' going on. E2E is secure to the best of our knowledge, and we have expressed what exactly that means in our threat model: https://github.com/google/end-to-end/wiki/Threat-model. E2E is covered by the Google VRP (https://www.google.ch/about/appsecurity/reward-program/), so if you're aware of any vulnerabilities, let us know.

The E2E extension is not production ready, but I myself am using the compiled version as it is, in my biased opinion, the most secure of the existing PGP-in-the-browser extensions.


> end-to-end is NOT production ready

This sounds wrong, given that "end-to-end encryption" as a concept is very much production ready.


For better or worse, "End-To-End" is the specific Google Chrome extension being discussed in the article: https://github.com/google/end-to-end/

Proper capitalization would help; Google choosing a less overloaded name in the first place would have helped more.


Agreed. I was referring to the ambiguous name of the project.

We're going to have a very hard time talking about end-to-end crypto to consumers without at least one person confusing it with Google's add-on. (Thanks, Google.)


What is so terrible for people about using an email client? I find using the Gmail web interface frustrating because they've removed the ability to pop out the compose view into its own window, so I can't easily reference information in another browser tab while writing an email, for instance. Meanwhile, Gmail has excellent IMAP support, which lets me use Thunderbird + Enigmail to get excellent PGP support.


Nothing is terrible, I suppose, but in the web client I personally enjoy the fast search, filtering, labeling, split pane smart inboxes, integration with Calendar, modal keyboard shortcuts (they aren't operative when you're typing in a text box but are active otherwise) and non-intrusive threading (conversation-view).

The fact that all this is the same on my Macbook Pro, on my Arch Linux desktop, or when I reboot to Windows, without any effort on my part to synchronize settings makes this all rather excellent.


Same here. I use at least two PCs as well as mobile systems. I love having the same data everywhere without having to sync anything.

That being said, I wouldn't mind running my own webmail as long as it has most of the Google Apps / Office 365 features.

I'm hoping for mailpile myself even if it still takes some time.


Except for the split inboxes, I have all of that in Thunderbird with Lightning.


Nothing. Except Thunderbird+Enigmail is not something that is going to appeal to the masses. Web-based mail came along and all of a sudden it didn't matter where you were in the world, you could log in and check your email from any computer.

Encryption means nothing unless both parties are using it (obviously), and if it's only a few users who are encrypting emails using Thunderbird+Enigmail then there's that "why are you encrypting, you must have something to hide" effect, and so the theory goes that you are more likely to catch the eye of big brother, who then keeps a closer eye on you... if everyone is encrypting everything then we are all on the same footing.

Unfortunately things need to be made easy for the masses to adopt, like the way iMessage works, which is seamless. I don't want to worry about which PGP server to upload my key to, or have to go to 10 different places to revoke my key.

BUT, I don't trust any web-based encryption either since it's liable to be tampered with: https://en.wikipedia.org/wiki/Hushmail

Thunderbird+Enigmail is the safer way to go in general (and is actually what I use, too).


You still can. You have to shift + click or ctrl + click the Compose button. Or if the compose window is already open you can do the same with its maximize icon.


I use the Gmail priority inbox and have not seen a single mail client that is able to replicate it. It's incredibly powerful, showing me first the emails that are labeled as important and are unread, then the already-read and starred emails, then everything else. This vertical split into three sections is quite important to its power. Combined with data-crunching algorithms that learn what is important to me, it makes it very easy to keep working on only the important things, even when you get flooded. I'm so dependent on it that Thunderbird simply doesn't cut it. The only reason I use Thunderbird is for those people who really, really want to encrypt mails. Otherwise it doesn't get even a little close to what I want. Do you know if it's possible to get this in Thunderbird?

PS: Not even the Android Gmail client can replicate it. It can show my important-labeled mails, but it also shows the already-read ones and doesn't have that split.


You can shift-click the "full screen" button on the compose window to pop it out into a separate window. There is a tool tip that appears if you hover over it.


My main issue is that search and labels just don't work as well as in the Gmail web app.

Also, I'm sending 90% of my emails from my phone anyway, and I don't know of any good client that works with both Gmail (with transparent caching of recent/viewed messages) and GPG.


Did you try K-9 mail with APG?


No, but I will do it now.


"Meanwhile, Gmail has excellent IMAP support"

HAHAHAHAHA good one


At the moment, I use the Mailvelope extension.

https://www.mailvelope.com/

Excellent in Chrome - a bit slow in Firefox.


There is WebPG too:

https://www.webpg.org/

It wraps around your existing gnupg installation.


Mailvelope is wonderful.


The FAQ is a bit confusing. It says there's only one keyring, but at https://github.com/google/end-to-end/wiki/Keyring they admit it was not a great idea and that they're splitting the responsibilities.

Based on the last planned implementation (External Key Manager (GnuPG bridge, other hardware, network oracle etc..)), I hope it will "just work" with hardware keys.


One of the developers here:

Yes, the keyring reimplementation is in progress and will be finished very soon. After the redesign, applications built on top of the E2E library will be able to use different sources of both public and private keys (so it's easy to do integrations with GnuPG, hardware keys, HKP, or e.g. Facebook).

The API will be similar to what's in https://github.com/google/end-to-end/wiki/Keyring.


One thing that is currently missing from E2E (as far as I can tell having played with it a little in the last month) is any kind of web of trust. When I import a key, I can't tell if it has been signed by me or someone I trust. Is this on the radar for the UI after the Keyring reimplementation is finished?

At the moment, what we've suggested at our work is that people manage keys in GPG and then only export keys into E2E if they trust them. But it would be nice to be able to do those kinds of things in E2E (or at least be able to tell if a key was signed by me).

BTW, thanks for working on this!


See https://github.com/google/end-to-end/wiki/Key-Distribution. In short, we don't invest much into WoT.


After giving that a read, I'm happy that there are people way smarter than me working on these problems. Kudos to your efforts!


The fact that you lose your draft when the window loses focus kills it for me.


There's no way I'd share my private key with whatever extensions there are. It just doesn't feel right.


Good to see that E2E has been progressing, but I still wish we had a PFS alternative to PGP.

With today's hacks and the state of total surveillance we are in, it's a little crazy to expect people to use the same key for 10+ years without it getting compromised. Even a year seems too long.


Wait, you have to install a JDK to use this chrome extension?


No, to build the chrome extension from source.


oh, thanks for clarifying.


Does it use inline PGP or PGP/MIME?

(On an unrelated note, I wonder if there's anything on mobile that supports PGP/MIME. K-9 Mail only knows about inline.)


I tried to use GnuPG but can't wrap my head around the web of trust. My main concern about the WoT: what does my signature on another guy's public key actually mean? My takeaway is that there is an implicit statement that you sign, but I don't really know what it is. According to the GnuPG privacy handbook[1] this statement is roughly "I trust that this guy can properly sign other stuff", which is kind of recursive, but I like it.

But when you validate whether a message was really written by a guy named John Doe, the trust path doesn't actually verify this. The trust path verifies that this guy who claims to be John Doe is good at signing other guys' keys. I don't care about that at this point. Of course people don't actually think of the statement written in the privacy handbook; they implicitly sign the statement "I trust that this guy can properly trust other stuff, and I know that this guy's name is what he currently has as an ID." There are several problems with this in practice:

- Now you are signing two statements with one signature and you can't separate the two. Using anonymous public keys becomes tricky, as you lose one half of your statement.

- Signing the ID happens at the wrong place. Mallory can revoke her identity, then push "Barack Obama" as her new ID; now she can send messages in the name of the president. Of course in practice it's hard, because you can't delete revoked IDs from key servers. But at this point you're trusting the key servers, and I thought key servers were not a trusted part of PGP.

- The GnuPG guys advocate key signing parties: you gather in one place, bring your ID, then sign each other's keys. The problem with this is that you only verify half of your statement (the ID). The other half is tricky to verify (Can this guy properly protect his private key? Will this guy just randomly sign everything he sees?). I think it makes more sense to trust a friend whom you already know well. Of course there are trust levels, and I think you should only use marginal trust at these parties, but I don't know what the common practice is.

- There are keys that are not tied to people; they are typically tied to software packages. Now what does it mean when someone signs such a key? Take PuTTY as an example. Its master key is signed by several people. Each signature could mean "this is the PuTTY project's master key that is used to sign the binaries themselves", which the signer can verify by knowing the developer and that he is trustworthy (at this point PGP is misused, though; the signer should have signed the developer's key instead, and only the developer should sign the binary-signing key, but that would make trust paths longer). It could also mean "I trust that PuTTY is a great piece of software and doesn't do anything nasty behind your back", which requires an entirely different verification.

In the end the trust path is a way-too-simplified projection of these statements, and most likely you can't actually verify the statement that you care about.

[1] https://www.gnupg.org/gph/en/manual.html#AEN282
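The "trust path" complained about above can be made concrete with a toy model: signatures form a directed graph over keys, and a path through it only chains statements of the form "X checked the uid on Y's key" — nothing in the path says anything about what the endpoint will do with its key. A minimal sketch (all key names are hypothetical, not real keyring data):

```python
from collections import deque

# signatures[a] = set of keys whose uid 'a' has signed (toy data)
signatures = {
    "me": {"alice"},
    "alice": {"bob"},
    "bob": {"john_doe"},
}

def trust_path(signatures, start, target):
    """BFS over key signatures; returns one chain of vouches, or None.

    Each hop only means 'the signer checked the uid on the next key';
    it says nothing about how carefully that key's owner signs others.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in signatures.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(trust_path(signatures, "me", "john_doe"))
# -> ['me', 'alice', 'bob', 'john_doe']
```

Every hop in the returned path certifies an ID binding, not competence or honesty — which is exactly the over-simplified projection being criticized.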


- re. anonymous keys: there is no need for them to be part of the web of trust. If you really want them there, it is a common requirement for signatories to require at least a verifiable e-mail address bound to the person they know and are authenticating. But then, arguably, it's not very anonymous any more.

- re. multiple identities: cross-signatures in the web of trust are made on individual identities, not on the master key! So if you sign an identity and the key owner then creates another one, that new identity will not carry your signature.

- re. key parties: so far it wasn't an issue, because it was implicitly assumed that people attending a key party were savvy enough to understand how to sign. For PGP to be democratized, the concept of the key party needs to evolve. I personally combine them with a small lecture on PGP use, and say "you get to sign each other only if you have attended the lecture, including its small practical".

- re. software keys: these are typically considered an extension of the developers' keys. You as a user shouldn't sign those.


> it was implicitly assumed that people attending a key party were savvy enough to understand how to sign

It's not only about being savvy, though. You trust the person not to misuse his/her key, unknowingly or knowingly. If you only use marginal trust at a key signing party, that can be mitigated somewhat. What's the usual trust level used at such a party?

Edit:

Then the PuTTY master key is misused, according to you: https://pgp.mit.edu/pks/lookup?op=vindex&search=0x4F5E6DF56A...


I don't think PGP can ever provide validation that some person owns some email address. You're signing the uid/address on a key which people prove they control. You can ask them to send you a signed message from that address to prove it, but if they stole someone's key, they likely have access to the email too. Even if they don't, emails are still very easy to spoof.

Regarding changing the uid, your scenario cannot happen (if I understand it correctly): you sign uids, not the keys themselves. That means you can sign specific uids. If they delete one and add another uid, it's not signed anymore. (Also, you can revoke signatures if someone starts misbehaving.)


I really don't mean the email address but the real name. People normally verify real names when they sign someone else's key anyway. Of course there is the problem that real names are often not unique.

Email address spoofing is too easy, I don't think that "owning" an email address is reasonably verifiable.

I wasn't aware that people sign each other's uids. What happens if someone's name legally changes? Does he/she have to rebuild his/her WoT? Of course, you can send out an email from your old ID announcing that your name is changing.


"Signing a key" really means signing the binding of a uid to a public/private key pair. I had to look at the Open PGP spec to figure that out ;-)

The question is, how do you verify that a particular person is the exclusive owner of a uid? Because the uid is just a piece of text, there is no general way that will work every time. It depends on what the uid is. Someone could put their passport number in the uid, and you could check the passport to verify that they own it.

Of course most people use an email address as a uid. There are some key exchange protocols that will give you a pretty good idea that someone controls an email address (although they may not be exclusive owner). Basically you email them with some information and then they email you back with an encrypted version of that information. While they can spoof sending the email, they can't easily receive the original information to encrypt it.
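The key-exchange idea described above (mail some information, get an encrypted version back) can be sketched as a toy protocol. Real public-key encryption is replaced here with a hypothetical stand-in function, since the point is only the shape of the exchange: a spoofer who cannot receive mail at the address never learns the nonce.

```python
import hashlib
import secrets

def respond_to_challenge(data: bytes, key: bytes) -> bytes:
    """Hypothetical stand-in for encrypting 'data' under 'key'."""
    return hashlib.sha256(key + data).digest()

# 1. The verifier mails a random nonce to the address in the uid.
nonce = secrets.token_bytes(16)

# 2. Whoever actually receives that mail sends back the nonce,
#    processed under the key they claim to own.
claimed_key = b"john-doe-public-key"
response = respond_to_challenge(nonce, claimed_key)

# 3. The verifier recomputes the expected answer from the key in the
#    uid. Sending mail *from* an address can be spoofed; receiving
#    the nonce *at* that address cannot (easily).
assert response == respond_to_challenge(nonce, claimed_key)
print("claimed owner demonstrated receipt of the challenge")
```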

Generally speaking, online, email uids are what is most useful. You need to know that the person who sent the message has access to the email address in the uid. But if you are actually concerned about the identity of the person, passport numbers, etc are better uids.

A key can actually have many uids. You can add new ones at any time. But if you do, you need to get people to sign the new uids (because while they may trust that you control one uid, they still need to verify that you actually control the other uid).

How you decide that the uid is owned by the owner of the key is up to you, and you are free to sign or not sign any key. The web of trust comes in where someone else has signed the uid on a key. There is a kind of second-order level of trust with that signature. If my buddy Fred has signed the uid, and I am absolutely sure that Fred will never sign a uid without making sure that it is owned by the owner of the key, then I will probably trust it. But if my buddy Carl signs it, I might think "Carl is not diligent enough to check it out", so I might not trust it as much. How you assign third parties' trustworthiness to sign appropriately is up to you (and you do it for each of the third-party keys that may have signed something).
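The Fred-vs-Carl judgment above is exactly what GnuPG's ownertrust levels encode: with the default settings, a uid counts as valid if it carries a signature from one fully-trusted key or from three marginally-trusted ones. A toy sketch of that rule (names are made up; the 1/3 thresholds are GnuPG's defaults, configurable via --completes-needed and --marginals-needed):

```python
# GnuPG's default validity rule: a uid is valid if signed by
# >= 1 'full'-trust key or >= 3 'marginal'-trust keys.
COMPLETES_NEEDED = 1
MARGINALS_NEEDED = 3

def uid_valid(signers, ownertrust):
    """signers: keys that signed the uid; ownertrust: key -> 'full'|'marginal'."""
    full = sum(1 for k in signers if ownertrust.get(k) == "full")
    marginal = sum(1 for k in signers if ownertrust.get(k) == "marginal")
    return full >= COMPLETES_NEEDED or marginal >= MARGINALS_NEEDED

ownertrust = {"fred": "full", "carl": "marginal", "dave": "marginal"}

print(uid_valid({"fred"}, ownertrust))           # True  (one full signer)
print(uid_valid({"carl"}, ownertrust))           # False (one marginal)
print(uid_valid({"carl", "dave"}, ownertrust))   # False (two marginals < 3)
```

So a signature from diligent Fred alone validates the uid, while sloppy Carl's signature only counts toward the marginal quorum.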

I hope that helps!


> But if you are actually concerned about the identity of the person, passport numbers, etc are better uids.

Why not just name + email? I don't think putting in anything like a passport number is a wise idea. It's just putting one more piece of semi-secret information on the internet.

I think the encryption and signing+trust gets mixed here. If I know someone only by email address, I don't trust their key very much. But I'm still going to use it to encrypt messages to them, because it's better than nothing. I'm going to trust it if it's got a track record of reasonable messages on mailing lists or git commits over some period of time.

But back to the main topic: email-only and passport number are two extremes, and the second one is even harder to verify. Most people actually just use a name+email combination in uids. And that's what gpg invites you to do when generating keys. Why didn't you mention it?


The point is that the uid is just a string. You can put anything you want in it. Most people put their name + email because that's what is useful for most internet transactions. If you get an email from someone, you want to be able to verify that it is from the person who controls that email account.

For example, I work in Japan. My colleagues work in the UK. Generally speaking I know who they are because I work with them every day. However, I have never checked their real-life credentials, because it doesn't matter to me. When I receive an email from them, all I am interested in is "Is this Joe Blow that I work with from the UK?" I don't care if Joe Blow is their real name, or if their carefully cultivated story of growing up in Southampton and going to the London School of Economics is true (as opposed to growing up as an indentured clown in a travelling circus and escaping at the age of 14 on the back of an elephant). When I receive an email from them, none of that matters. I just care if it's the same person I worked with.

So most of the time people put their email address and a name (which may or may not be fictional). When I sign that uid against a key I am saying, "I verify that this person controls this email address and is the person I know as Joe Blow".

If you actually care about whether the person is who they say they are, then the uid is going to have to contain something more identifiable/verifiable. So for instance, let's suppose I'm a bank. I want to verify that orders coming from my customers are really from that person. It is important that it is really that person because there are laws about identifying where bank transactions originate. I don't care which email addresses they control. I don't care if they are spoofing their email address. What I care about is that any messages I receive from them are really from them (and that any encrypted messages I send to them is really only readable by them).

So a passport number would be a good solution. The person walks into a bank with a USB key containing their public key with a passport number uid on it. The bank physically checks their passport to see that it matches and that it does, indeed, belong to the person who claims that it does. It then signs that uid against the key and locks it away somewhere. The key never has to be on the internet. Why would it? Nobody on the internet cares who you really are (apart from the NSA, I guess). They only care that you are the person who owns a particular email address.

I agree that there is a lot of confusion over encryption, signing, trust and identity verification. To a large part, I blame the convoluted nature of Open PGP and the existing documentation which conflates many, many issues.


I don't understand why you want to sign that UK person's key in the first place. You can't verify who they are, and you only communicate online. OK, so that sounds to me like you shouldn't announce "I trust this is a real name/email" in that case. You can trust it privately and still use the key anyway.

But nobody is forcing you to sign the key, or even then to publish that information on the internet. As you say - how do you know they're that person?

(I'm ignoring the part of - should we even care that the identity matches. You communicate with the (person, key) tuple - you can still trust that tuple and not care if the name matches government documents)

A bank is a slightly different case. Your key for communication with the bank is (usually) more interesting than the key you use for emails. The bank also has both interest in protecting communication and means to do something better than key signing. Some banks will actually issue you a smartcard (or some equivalent) from which you can't export the private key at all - it may even be a gpg key. The point is - that's a much better protection on average, because an average bank customer has no idea about gpg, but will likely not give the smartcard away to someone random. Finally, few people have passports and you shouldn't be required to have a passport to have a bank account.


You're correct, and I think this is part of the growing disgruntlement with PGP.

The most successful deployment, Debian, is a closed system: while Debian developers' keys and signatures are on the standard public keyservers, for Debian membership and archive authentication purposes, only signatures from existing Debian members count, and only keys explicitly pushed to Debian servers are usable. So it is possible for Debian to demand a specific meaning for signatures (which, possibly surprisingly, is "I have verified that the government associates this name with the holder of this private key": https://lists.debian.org/debian-devel/2009/06/msg00787.html), and it can hope that its members use a compatible definition when signing other members' or prospective members' keys. But Debian does not read trustworthiness, either to sign other keys or to generally be a good person, in PGP signatures. That's set by being / becoming a member of the project, which is no one's individual decision.

Leaving aside trust, keysigning parties have increasingly been sounding to me like an invitation to show up with fake ID and find some people who've never met you before. Possibly while everyone is drinking. They're also sort of a weird habit for a privacy-loving crowd, since they involve publishing non-deniable records of who met who, but hey.

To be fair, none of this means that the PGP protocol is bad, or even the GnuPG software is bad (although GnuPG sometimes seems like it wants to make the web of trust even murkier than it is). It just means that the public web of trust is useless, and you either need direct key exchange with those with whom you want to communicate, or some sort of organization that you trust to verify keys for the purpose at hand. For instance, if you're doing email within a company for company purposes, letting the company track keys is probably totally fine.


Huh, I didn't know about the Debian rules. But Debian keys and signatures are published to public keyservers too, aren't they? So your trust paths that contain a Debian developer may not guarantee the trustworthiness of the endpoint.

I thought that bootstrapping my trust from one of the Debian developers would make sense, since I already have Debian installed (you have to start somewhere). Looks like it's not that good an idea.


Yeah, a trust path that is end-to-end all Debian developers is fine. And I sometimes know that a certain human (that I don't have a way to meet with in person) is a reasonable human being, so I'll trust a path to that human that consists of Debian people, or other people I personally know to be reasonable human beings.



