I'm giving up on PGP (filippo.io)
745 points by FiloSottile on Dec 6, 2016 | hide | past | favorite | 338 comments



I find the point about the split very interesting: what the WoT was supposed to be in theory, versus how little it amounts to in practice when it comes to actual key-verification habits.

It has been said many times that the lack of adoption of PGP in mail was due to the average user not being able to grasp the concepts behind proper key management, but the article points to common practices among "power users" who drop the theoretical best practices and fall back to insecure modes, given the effort needed to properly verify a key binding. If the community that cares about encryption and privacy can't routinely verify keys, the whole system definitely has a weak link.

I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here.

And to me, the assumption that the most usable thing we can use instead is something that relies on mobile phone identifiers, more often than not tied to a physical-world identity, is really something to worry about.


> I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here.

I don't think the "WoT" is conceptually flawed, and frankly, the argument that "people of average intelligence" can't grasp the concept comes from a very high horse and is also untrue. It's simply that any and all software for PGP utterly fails in the UX and functionality department when it comes to key management.

Web of Trust implies such a glaringly obvious visual metaphor that I am truly in awe that not a single program works that way.

Tabulations of keys are not a WoT, period.

I don't verify keys one-by-one, that's bullshit. I get one good key that's part of a WoT, and then go from there, and can easily see from the web structure that other keys are good and what their relations are. None of that is accomplished by any PGP frontend.
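That "one good key, then go from there" step is just reachability over the signature graph, which is exactly what a frontend could visualize. A minimal sketch of the idea, with entirely made-up key names and signatures:

```python
from collections import deque

# Hypothetical signature graph: key name -> keys it has certified.
# All names here are illustrative, not real keys or fingerprints.
signatures = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": [],
}

def reachable_keys(trusted_root, signatures, max_depth=3):
    """Breadth-first walk of the signature graph from one verified key,
    recording the shortest certification path to each reachable key."""
    paths = {trusted_root: [trusted_root]}
    queue = deque([(trusted_root, 0)])
    while queue:
        key, depth = queue.popleft()
        if depth == max_depth:
            continue
        for signed in signatures.get(key, []):
            if signed not in paths:
                paths[signed] = paths[key] + [signed]
                queue.append((signed, depth + 1))
    return paths

# Starting from the one key I verified in person, everything reachable
# comes with a human-readable certification chain.
paths = reachable_keys("alice", signatures)
```

A UI built on this could show the chain ("alice signed bob, bob signed dave") instead of a bare trust-level radio button.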

Instead I get stupid and unhelpful error messages ("no key available" - I just downloaded it!) and some of the most terrible crypto UI I've seen ("How much do you trust this key? [ ] Not at all [ ] A bit [ ] Fully [ ] Totally" - w-t-f).

A technical criticism of PGP/GPG is of course also possible. The whole thing is a museum of early 1990s crypto, with default ciphers like CAST5 and messages not being authenticated - and even if the message is authenticated most parts of the PGP protocol are not, meaning that you got that big bunch of C code maintained by that one German guy over there that parses unauthenticated bytes that you shipped through half the internet with a big neon-red sticker on it saying "I'M PGP PLEASE TAMPER WITH ME".


I worked in IT for an engineering company that required all external emails to be PGP encrypted. Despite all engineers having Symantec PGP software installed and set up, plus training and the support of IT, they would often ignore this policy. The excuse, often valid, was that it would require IT from both companies to set up the encryption keys the first time for new users. If the system is too complex for engineers, the idea of this being usable for average users is a pipe dream.

PGP needs to be as simple as the SSL Lock in a browser if there is to ever be any hope of widespread adoption. There needs to be a single system of trusted PGP universal key servers that allow the details of key generation/management to be hidden from the user, just like SSL is with the web browser.

We'd still be using HTTP if SSL was as complex as PGP. Same with ssh replacing telnet. E-mail's lack of progress on this front is primarily a UX issue, and until it's solved PGP will remain a tool for the select few.


99% of crypto use would work just fine if you layered an OTR-like protocol on top of email.

First email is "hey we're interested in blah..." and is sent in the clear. Then have the message window change color as subsequent emails get the protocol more secured.


I hate color coding. I'm in the 8-12% of men that have red-green deficient vision. You can use 10% as a rule of thumb. If I'm not mistaken in my probability math, that means in a group of 5 men, there is a 50% chance one of them is "color blind."

Yet the world insists on using red/green as bad/good indicators. Drives me nuts.


You are mistaken. If probability of each of the 5 men being colorblind is independent (so eg. they're not related etc), then there's a 41% chance that at least one of them is (1 - 0.9⁵).

(There's a 33% chance that exactly one of them is: 0.1 × 0.9⁴ × ⁵C₁).
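Spelled out in code, with the same 10% assumption:

```python
from math import comb

p = 0.1  # assumed probability that one man is colorblind
n = 5    # group size

# P(at least one of 5) = 1 - (1 - p)^5
at_least_one = 1 - (1 - p) ** n

# P(exactly one of 5) = C(5,1) * p * (1 - p)^4
exactly_one = comb(n, 1) * p * (1 - p) ** (n - 1)

print(round(at_least_one, 2))  # 0.41
print(round(exactly_one, 2))   # 0.33
```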


Thanks. It's been too long since I've studied statistics. Might be time to watch some khan academy.

Oh, I see. Assuming probability that a man is not colorblind is 0.9, then you found the probability of none of them being colorblind and subtracted that amount from 1 to find the other side.

For anyone reading this that was bothered by my use of a specific gender, it's because colorblindness occurrence is significantly higher in men than women.


That's not so much lower, though.


I think a Poisson distribution would be more appropriate here, so the probability of at least 1 man in 5 being colorblind, assuming 10% of the population on average is colorblind, is:

    e^-0.5*sum((i)->(0.5^i/factorial(i)), 1:5) = ~39%


> Poisson distribution

Why? This is basic statistics (% of population).


I don't believe so - it's n instances of a bi-valued random variable, IID. It's exactly the case the binomial distribution covers.


Exactly. You have five chances at an event, and an event probability per chance, which is exactly what the binomial distribution is for.

The Poisson distribution is what you use when the effective number of chances is "large". The Poisson distribution is effectively a special case of the binomial distribution, where the number of chances is infinity, the probability per chance is infinitesimal, and the product of the two is the expected number of events.
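A quick comparison of the two, using the thread's numbers (p = 0.1, n = 5), shows how close the Poisson approximation already is, and how it tightens as n grows with n·p held fixed:

```python
from math import exp

p, n = 0.1, 5
lam = n * p  # Poisson rate matching the binomial mean

# Exact binomial: P(at least one colorblind person among 5)
binom = 1 - (1 - p) ** n        # ~0.410

# Poisson approximation: P(X >= 1) = 1 - e^(-lambda)
poisson = 1 - exp(-lam)         # ~0.393

# Same mean, larger n, smaller p: the binomial converges to the Poisson.
binom_50 = 1 - (1 - 0.01) ** 50  # ~0.395
```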


I also have red/green colorblindness, but it's not as if color coding has an 8-12% failure rate. Typical red/green indistinguishability mostly applies to dark reds and greens, whereas the reds/greens used in UI tend to be quite bright and vivid.

People who can't distinguish vivid reds/greens are much rarer. (If 10% of men couldn't separate red from green at the stoplight, we'd have noticed a long time ago.)

I agree we should include everyone, of course, which is why there should also be patterns, shapes, and sound to assist. I'm just saying color coding is not some colossal mistake.

(Also, your math is off if you're assuming a 10% base rate. To have at least a 50% chance of one person in a group of men being color blind, you need 7 men: 1 - 0.9⁷ ≈ 52%.)


>Yet the world insists on using red/green as bad/good indicators. Drives me nuts.

Well, it has to use something, and other people are colorblind to other colors, plus some people are fully blind too.

In this case, one would expect there'd be some OS-wide color utility to alter colors to the ones the user can discern.


Yes, but those other types are far more rare. Also, there are shade choices that make things much more accessible for people. Almost every day I ask someone what color something is because of a poor selection. Chances are you have a color blind person in your office; just run visualizations by them really quick.


> Chances are you have a color blind person in your office, just run visualizations by them really quick.

This is a great check, but I also recommend trying things just in greyscale as a simple test and installing something like Color Oracle [0]. Also, most of the problems can be solved by looking for an already existing solution, like swapping your heatmap colour scales for viridis [1].

The most important thing is recognising that these kinds of issues exist and pro-actively looking for good current solutions (the same applies for things like trying to ensure your site works well with screen readers).

I'd love to hear of more tools or other things that can help if people have suggestions!

0 http://colororacle.org/

1 https://cran.r-project.org/web/packages/viridis/vignettes/in...
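The greyscale check mentioned above can even be approximated in a few lines: convert each colour to its luma (Rec. 709 coefficients here) and compare; if the luma difference is small, the pair will be hard to tell apart once hue is discounted. The colours and the threshold below are illustrative only:

```python
# Rec. 709 luma: what a colour roughly looks like in greyscale.
def luma(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def distinguishable_in_grey(c1, c2, threshold=40):
    """Crude test: colours whose luma differs by less than `threshold`
    rely almost entirely on hue, which colorblind users may not see."""
    return abs(luma(c1) - luma(c2)) >= threshold

# A red/green pair that also differs in brightness (safer):
ok_red, ok_green = (200, 0, 0), (0, 180, 0)

# A red/green pair of similar brightness (hue-only, risky):
bad_red, bad_green = (180, 80, 0), (90, 120, 20)
```

Tools like Color Oracle simulate the actual deficiency types; this is just the cheapest first pass.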


Well-run email clients and servers are in an OK state today regarding encryption, and the weaknesses that exist are more related to adoption than technology. Take a typical office setup: your email client communicates over TLS with your central mail server (e.g. Exchange, Postfix) to retrieve or submit messages. This is true with webmail clients like Gmail and Outlook Web Access, as well as ones like Outlook/Mail.app/etc.

When sending or receiving email from other organizations, your mail server will communicate over TLS as well. Virtually all ISPs support TLS-protected SMTP today, and if you run your own server you can set it up easily enough. Google's Safer Email Transparency Report publishes statistics about the percent of encrypted email between Gmail and other top email ISPs [1]. TLS wasn't always widely supported in the past -- just as with web servers -- but any modern installation today will support TLS.

Modern email installations and usage are about as secure against passive surveillance as HTTPS. Where email is still weak is in its use of opportunistic TLS (still common, if not the default). An adversary capable of conducting a man-in-the-middle attack can force connections to fall back to plaintext, or can present a bogus self-signed certificate, since many servers and clients do not expect a path-validated certificate.

There are defenses against those attacks, though. You can configure your SMTP server to require TLS, and to accept only path-validated TLS certificates from trusted certificate authorities. This will prevent an adversary from forcing your traffic to plaintext, and will prevent them from substituting a bogus self-signed certificate. With these protections in place, one can achieve a fairly good measure of security with basic email.
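From the client side, the same strict policy can be sketched with Python's smtplib. The hostname is a placeholder, and a real deployment would also set the equivalent require-TLS options on the server itself:

```python
import smtplib
import ssl

# A default context loads the system CA roots, requires a path-validated
# certificate, and checks that the name matches -- i.e. it rejects both
# plaintext fallback and bogus self-signed certificates.
context = ssl.create_default_context()

def send_strict(host, msg, sender, rcpt):
    """Submit a message, refusing to proceed without verified TLS.
    `host` is a placeholder, e.g. your own submission server."""
    with smtplib.SMTP(host, 587) as smtp:
        # starttls() raises if TLS cannot be negotiated; with this
        # context, an invalid certificate also aborts the connection.
        smtp.starttls(context=context)
        smtp.send_message(msg, sender, rcpt)
```

With opportunistic TLS the same client would silently continue in plaintext when STARTTLS fails; the difference is entirely in refusing that fallback.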

This is not to say that email is suitable in all circumstances. For high sensitivity use-cases one should consider attacks on infrastructure like the mail server: as we saw in this year's political campaigns, mail servers can be a trove of confidential information. One benefit of the GPG approach is that the mail server does not need to be trusted with the confidentiality of the communication, and so cannot compromise it.

[1] https://www.google.com/transparencyreport/saferemail/ The systems that show up as plaintext in this report are largely older commercial bulk email sending systems; all consumer-oriented ISPs adopted TLS a while back.


I would dispute your contention that 'most' ISPs support (opportunistic) TLS for SMTP. Looking at the email I receive, both at gmail and self hosted domains, almost no one will do TLS - only big internet companies like eBay. No clients that I'm aware of maintain a history of opportunistic TLS success.

Maybe Let's Encrypt will change TLS adoption but I doubt it. It would need Google/Hotmail/Yahoo to increase the spam score for non-encrypted mail. But it's still way too complicated.


> There are defenses against those attacks, though. You can configure your SMTP server to require TLS, and to accept only path-validated TLS certificates from trusted certificate authorities. This will prevent an adversary from forcing your traffic to plaintext, and will prevent them from substituting a bogus self-signed certificate. With these protections in place, one can achieve a fairly good measure of security with basic email.

This only works if you're also forcing DNSSEC: otherwise, the attacker can substitute their own MX in your DNS responses.


> PGP needs to be as simple as the SSL Lock in a browser if there is to ever be any hope of widespread adoption.

Something like the level of confidentiality feature in (soon to be) caliopen[1]:

>> What is behind the idea of confidentiality level?

In Caliopen, every element has its own "confidentiality level" which tells the user what is the security level for any contact or message or conversation...

Each terminal declared by the user is graded according to its use (e.g. strictly personal, at home, or as a public access point within the enterprise) and its type (a phone is necessarily less secure than a desktop PC, as it is much easier to lose). When used for the first time, a non-declared terminal receives a score of zero.

Incoming messages are rated according to whether or not they are encrypted, and also according to the type of encryption key. The type of transport (secured or not) influences the rating, as does the confidentiality level associated with the related contact if it is known. The algorithm that rates a conversation is more complex, but takes into account all the described elements.

For a user's contact, the confidentiality level is equal to the global confidentiality level of this contact's account: this global confidentiality level is the only public element of a CaliOpen account.

Finally, every CaliOpen instance should eventually ge[…] <<

[1]: https://caliopen.org/


> I don't think the "WoT" is conceptually flawed, and frankly, the argument that "people of average intelligence" can't grasp the concept comes from a very high horse and is also untrue.

I disagree. I have written email encryption software, and I find WoT complicated.

In light of this recent article on HN https://news.ycombinator.com/item?id=13111768 I suggest you re-evaluate the level of ability of the average user.


Yeah. It takes some reading to understand the different levels of trust for a key. Even rewording those levels could be effective, if perhaps a bit verbose.

[]Distrust []Trust only this key []Trust this key to automatically trust other keys []This is my key

Obviously I don't know what all the levels exactly mean, but as far as I can tell, these are the levels of "web of trust", where for it to truly be a web, #3 should be the default.

I either don't trust someone, I trust someone to represent themselves, I trust someone to vouch for others, or I am that someone.


There's also context. I trust the government to represent my bank. I don't necessarily trust it to represent my friends etc.


I have to partially disagree with that.

Calling PGP an utter failure is an understatement. Just like calling a cat a small tiger.

PGP is possibly the WORST experience in usability for any well known software that ever lived.

This thing should be taught in courses for decades to come as how to fail a product by 1) having no UI 2) no integrations with anything 3) zero usability 4) not even trying to give a fuck about normal users 5) in fact, not even trying to make it possible to use for advanced users.

---

You want signed email & identities. It's simple.

Just get the national government to distribute RSA USB keys to every citizen. Then they can use them on public government websites (taxes & jobs stuff) to confirm people's identities, just by plugging in the key. Quick and simple. (And that's not incompatible with ALSO asking for a password that was sent in a different paper letter. 2FA-style.)

Then later, citizens can sign the emails they send to everyone with gmail/hotmail because they'll add the feature to recognize the national USB identity key, now that there are X millions people using it.
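For illustration only, here is a toy of the challenge-response login such a token would perform. A real token would compute an RSA signature inside the device, with a private key that never leaves it; HMAC is used below purely as a runnable stdlib stand-in, and every name is made up:

```python
import hashlib
import hmac
import secrets

# Secret material that, in the real scheme, lives only inside the USB
# token's secure element and is never exported.
token_secret = secrets.token_bytes(32)

def token_sign(challenge: bytes) -> bytes:
    """What the token does when plugged in: answer the site's challenge.
    (Stand-in for an RSA signature over the challenge.)"""
    return hmac.new(token_secret, challenge, hashlib.sha256).digest()

# Server side of a login: issue a fresh random challenge so responses
# can't be replayed, then verify the token's answer.
challenge = secrets.token_bytes(16)
response = token_sign(challenge)
server_check = hmac.compare_digest(
    response, hmac.new(token_secret, challenge, hashlib.sha256).digest()
)
```

The separately-mailed password would be the second factor, checked alongside this exchange.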


Why the fuck would you ever insert a government-provided USB key into any computer you actually cared about, much less actually use any government-provided key? The national government is the prime adversary. I mean, seriously, Alice and Bob want to communicate, and your solution is that they should use Eve as a courier!??


FWIW: your point--and it's a good one--is better made without the yelling.

And to answer you, though I don't speak for the person you were replying to: the U.S. government isn't even in my threat model. If the Eye of Sauron points my way, I lose. And so, given that as a prior, a universally trustable third party is not a bad idea. The implementation might totally suck (and I think it's a better practice to have a wide set of vendors and to make the acquisition of such a key subsidized, rather than directly provided, by the .gov), but the central point of trust doesn't, and it isn't inherently a problem for it to be state-operated.


James Mickens has an awesome point roughly along these lines. It is one of my all time favorite lines about security up there with some Gene Spafford stuff.

""" If your adversary is the Mossad, YOU'RE GONNA DIE AND THERE'S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they're going to use a drone to replace your cellphone with a piece of uranium that's shaped like a cellphone, and when you die of tumors filled with tumors, they're going to hold a press conference and say "It wasn't us" as they wear t-shirts that say "IT WAS DEFINITELY US," and then they're going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them. """


I take your point - though I was not intending to yell, but to represent my state of alarm and bafflement, as the original poster's worldview is both alien and frightening. It's as though someone were circulating one of those "modest proposals" to improve automobile security by installing a sharp metal spike on the center of every steering wheel, without realizing that these things are supposed to be jokes.

A universally trustable third party key provider might not be a bad idea in itself, but a national government - any national government, I was not specifically referring to the US government - seems about as non-trustworthy as any third party gets. To my way of thinking, the whole point of cryptography is to allow the weak to protect themselves from the powerful, and giving the organization which is at least nominally the most powerful agent operating in one's area the opportunity to poison your crypto before you even begin seems... counterproductive.


There's a power curve though, yeah? That most powerful agent can just drag you off and apply rubber-hose cryptography. Or just drone your ass. At some point, worrying about what else they can do is kind of a whateverburger.


They can do that, but it's not cheap for them. Giving the government total, automated access to your communications - and that's what we're talking about here IMO - lets them (and you, admittedly) avoid the whole beating scene. In real terms, it would drastically change the balance of power.


> Giving the government total, automated access to your communications

Hell no. Absolutely not.

We're talking here about a state distributing USB keys with a certificate to each citizen. The certificate is 'vouched for' by the CA; it confirms the name of the citizen, and it can be used to sign stuff thanks to private-key cryptography.

One usage could be to access public websites, and use that USB key to log in and confirm your identity.


You're correct, I was distracted by some posts in this chain that seemed to be talking about using the key for encryption, not identification.

So the primary threat would be impersonation by the trusted party, which would erode the trust pretty quickly. I'm still a bit wary of this approach - do you think we'd manage to keep the keys from also being used for message encryption?

Where I live, there's actually a system somewhat like this in place - banks etc. can provide identity verification to websites upon request. You log in to the bank's system with your account and one-time pad they provide, and the bank tells you what information it will pass on to the requester. It seems pretty decent, and might actually be better than the USB key approach.


Certificates use asymmetric keys; they make it possible to exchange encrypted messages, among other things.

Now, that doesn't mean that we have to encrypt stuff. It could be used for the authentication/identification only if that's all one wants to do.

> It seems pretty decent, and might actually be better than the USB key approach.

Same thing. Look at RSA SecurID keys, they do certificate + OTP generators. I've had those at a previous organization, it was well integrated and nice to use.

The certificates provide the identity. A private key or an OTP allows for authentication.

There are multiple ways to handle the identity and the authentication with 2 factors. Exactly what to distribute and who will distribute it is an implementation detail.


Sure, so the point is to avoid their notice. That's harder to do when they can snoop on your communications.


Not everyone is using encryption to stand up to the man... the national government isn't everyones prime adversary...


I wonder if there's a niche for a USB firewall dongle that you can plug anything in with and it will guarantee it's only treated as a file system or something else benign.


The device could be something like a modified USB condom, but with more processing power. It would have to have all drivers locked unless a sensor verified that nothing was plugged into the suspicious end, and use a fundamentally limited set of drivers. On connection, it looks for any filesystems on the suspicious device, mounts them, and offers them up by proxy as filesystems to the host machine. Nice idea.


> Just get the national government to distribute RSA USB keys to every citizen.

I lived in a country that did exactly that. And it was a disaster. The keys were trivially easy to steal, even by accident (personal experience here), and you still have the same trust problem as before, except that with a central authority now you do not have as much control.

I have also used the electronic-signature-comes-with-your-ID-card thing, and it was a similar disaster, with dodgy drivers and half-arsed crypto implementations in common software. E.g., try using the same token in Firefox and Thunderbird (or anything else) at the same time.

PGP is fine. It's just that proper security is not easy. And the same applies in the physical world as much as in computing.


It's 2FA; it doesn't rely ONLY on the key for authentication, and a token can be revoked easily if stolen.

The implementation and the technology have some challenges to be worked out. Just like every tech project that doesn't have GooMicroZon people. Nothing special ^^


>The keys were trivially easy to steal, even by accident

So distribute keys on smart cards that don't allow you to export the key. This is what Estonia does, and - concerns about their election infosec aside - it seems to work pretty well.


> So distribute keys on smart cards that don't allow you to export the key

That's what I covered in the second paragraph. :-)

The thing is, both those implementations were a disaster from either a technological or a security point of view. We're not even getting into whether a central source of trust is a good idea or not (you will look at the state of HTTPS and make up your own mind on that). So, to repeat, proper security is hard.


> I lived in a country that did exactly that.

Which country?


Interesting.

If you don't mind me asking, what country?


The first half of what you said, everything about PGP usability, is on point.

The second half--the federal government as centralized national PKI, generating everyone's private keys for them--is hilariously terrible.

I can almost hear Moxie, Matt Green, Trevor Perrin, and every infosec researcher and crypto engineer and privacy advocate facepalming in unison.


Guys. It doesn't have to be state controlled okay? It can be a SV startup if you prefer :D

The government is just an example because they already handle ID for everyone, and they need it to provide their services. It makes sense for them to go digital at some point and to guarantee the ID.

I didn't know that Americans were so anti-American ^^


Being distrustful of your government is completely "American." Having the U.S. federal government issuing an ID that is mandatory would make many in the U.S. raise a huge stink.


Good thing that social security number thing never took off I suppose.


It's a bit of a Continental European view that "they [the national government] already handle ID for everyone". Universal government ID isn't historically the case in countries descended from the UK.


Go read up on the whole Number-Of-The-Beast thing and mix that with a heavily Christian-influenced view of politics (which is there, even in the politics of not-particularly Christian people). We are talking about a country where some people view having a Social Security Number as a Very Bad Thing. Don't even get me started on the Hillbama-gunna-take-er-gerns crowd. They'll be blaming Obama for whatever well into the next decade.

We are a majorly paranoid society, for no particular good specific reason that I can see.


Really? Which national government? Why wouldn't they backdoor it?


Any government who wants to bring 2FA to its citizens.

Shipping a national backdoor is an entirely different topic. Please focus on the use case at hand.


I think that issue is central. You're describing a scheme with a central, trusted authority distributing keys. The question is, what's a central authority we can all trust? I think many people wouldn't trust any government. At that point, the design crumbles.


Well. You trust your government to issue national ID cards and e-passports already.


I'm guessing you are from Europe because the idea of distrusting your government appears to be really foreign to you. According to Wikipedia, "The passport possession rate of the U.S. was approximately 39% of the population in 2015."


Yeah I am.

I guess that makes sense. People in the UK don't have any "national ID card". Must be a culture thing.

I guess that makes sense. That's why I have to go through fucking stupid private background check agencies for everything in the UK.


Do you think only 39% of Americans have passports because the other 61% are all terrified of their government? Or might it be because international travel is a luxury not everyone can afford?


> distrusting your government appears to be really foreign to you.

That "U.S. Government" collecting, indexing and analyzing all data of any citizen must be an entirely different entity from the "United States Government" after all.


The US has strange hangups about national IDs.


Those are identification. Carrying a passport doesn't let anyone listen in on my inane banter with my friends.


Just because I have and use such a passport does not mean I think it's a good idea or trust it completely.


Brazil is heading in that direction.

Every single corporation here, and even freelance service workers, are required by law to have a key/certificate pair to be able to file taxes.

Pretty much every single commercial transaction is signed with a certificate.

It is not yet required for every individual citizen to have one, but everything points to that happening at some time in the future.


> Just get the national government to distribute RSA USB keys to every citizen. Then they can use them on public government websites (taxes & jobs stuff) to confirm poeple's identities, just plug in the key. Quick and simple. (And that's not incompatible with ALSO asking for a password that was send in a different paper letter. 2FA-style.).

Estonia does provide S/MIME certificates for every citizen, although I've been told that they're not particularly heavily used.

Between Estonia and the DoD, there's a few million users of S/MIME. Whereas PGP has a few tens of thousands of users. It's clear which product won out in the marketplace.


The standard for adoptability isn't the average person at their peak hours of attention and focus. It's the drunk teenager at 2 in the morning fumbling around in the dark.


A functional WoT should be no more difficult to use than managing your Facebook friends or the contacts on your phone. Neither of those is a difficult task, and both are accomplished by normal users every day.


Yeah I'm sure it's ridiculously simple and all the experts here saying it looks like the idea itself is fundamentally flawed are wrong.


I never said it was simple, and neither did anybody else. It's clearly not a simple problem, or we'd have a better solution by now!


> all the experts here

Which experts? And what are those fundamental flaws?


What is a "functional WoT"? The main trouble here: it's really hard conceptually, and if you delegate that hardness to a 3rd party you completely lose the essence of the WoT.


You might want to reconsider your analogy. When I was a drunk teenager at 2 in the morning fumbling around in the dark, my attention and focus was at 110%!


And you'd need to rely on a web of trust to avoid getting an infection.


>A technical criticism of PGP/GPG is of course also possible. The whole thing is a museum of early 1990s crypto, with default ciphers like CAST5 and messages not being authenticated - and even if the message is authenticated most parts of the PGP protocol are not, meaning that you got that big bunch of C code maintained by that one German guy over there that parses unauthenticated bytes that you shipped through half the internet with a big neon-red sticker on it saying "I'M PGP PLEASE TAMPER WITH ME".

Honestly, this made me laugh.

But it also makes me think. GPG and PGP are ancient pieces of somehow working code that probably should not be allowed to operate any significant parts of human communication.

IMO we should aim for crypto like Signal's: simple, yet secure enough for most users.

It might be worthwhile to bring Signal (or at least the idea) to other protocols like E-Mail.


> It might be worthwhile to bring Signal (or at least the idea) to other protocols like E-Mail.

I'm glad to see someone else thinking the same way as me on this! I got myself an idea for Yet Another Secure Messaging App a while back. After a little market research, I kinda decided, well, everything that I want to do can already be done by PGP, or Signal, or Whisper... except that where they are easy to use, they don't integrate with e-mail, and where they integrate with e-mail, they are not easy to use. So, there's still room for more diversity in the market of providing easy-to-use, secure, verifiable messaging, especially without trying to replace e-mail wholesale like so many messaging platforms (secure and otherwise) do. And maybe I'll actually get around to building it someday.


The easy part is the crypto, that's been done to death and back.

Signal has managed to solve the "How can we exchange keys while at least one of us is always offline?" part. So a good and somewhat-PFS key exchange should be possible too.
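The trick behind that offline exchange is prekeys: the party who will be offline publishes DH public values in advance, and the sender completes the exchange against those. A toy sketch with classic modular DH; the parameters are deliberately tiny and NOT secure, just enough to show the shape of the protocol:

```python
import secrets

# Toy DH group (illustration only; real protocols use X25519 or a
# standardized large prime group).
p = 0xFFFFFFFB  # largest prime below 2^32
g = 5

# Bob, while online, generates a prekey and uploads the public half
# to the server, then goes offline.
bob_priv = secrets.randbelow(p - 2) + 1
bob_prekey = pow(g, bob_priv, p)       # stored server-side

# Later, with Bob offline, Alice fetches the prekey, picks her own
# ephemeral key, and derives a shared secret immediately.
alice_priv = secrets.randbelow(p - 2) + 1
alice_pub = pow(g, alice_priv, p)      # sent along with her first message
alice_secret = pow(bob_prekey, alice_priv, p)

# When Bob comes back online, he derives the same secret from
# Alice's public value. Neither party had to be online simultaneously.
bob_secret = pow(alice_pub, bob_priv, p)
```

Signal's actual X3DH handshake combines several such exchanges (identity keys, signed prekeys, one-time prekeys), but the asynchrony comes from exactly this publish-ahead idea.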

Integrating that seamlessly with email is gonna be hard and require a service to register emails or domains that support the new protocol. Otherwise you end up with the PGP situation.

One wants people's mail to automagically encrypt when both sides have it. Automagically is the best security there is for Joe Average. On the big list of security problems you face for Jane Average, "Werks Automagically" is Point 1 written in golden ink by the pope himself in 72pt fontsize and "Secure against State Adversaries" is Point 2 written in silver by the pope's cat on the second page in 16pt fontsize.

One might also want to introduce a benefit, like disabling it for spammers by having some kind of verification (Phone Number or something?) and heavily policing HTML formatting, so that the little Green Icon next to the email means not just "this one spent 30 seconds to find a large prime pair" but rather "this email is probably safe to open, nobody will track you and nobody is going to sell you fake viagra".

People should want to use it, not have to use it to be secure or something.

But as you said, that all requires work and 99% of my time I personally like being unproductive, so I guess it'll never happen.


> It might be worthwhile to bring Signal (or at least the idea) to other protocols like E-Mail.

I like this idea a lot!


> I don't think the "WoT" is conceptually flawed, and frankly, the argument that "people of average intelligence" can't grasp the concept comes from a very high horse and is also untrue.

The average person already struggles to even use a computer and to accomplish tasks which seem very basic to us. [0]

I can't imagine them figuring out how to use PGP. The WoT is a complicated system to understand for even technically proficient users, let alone average ones.

[0] https://www.nngroup.com/articles/computer-skill-levels/


The web of trust is fundamentally flawed and for very simple reasons.

First of all, the WoT is based on belief networks. What is a belief? Is it based on a probability? Is it binary? Can we believe something negatively? Whatever it is, we know it is subjective and difficult to define quantitatively.

Second of all, even when formally specifying a belief using a certain quantity, we need to calculate transitive beliefs (person A believes with certainty x in the identity of persons B, C and D, who each believe with certainty y in the identity of persons E and A). This requires doing some matrix inversions for which the whole matrix needs to be known. However, who do we trust to provide us with the matrix? If I can provide the matrix, I can also tamper with it by adding extra nodes, thus shifting trust towards nodes I control. We can provide the matrix in a distributed fashion, but how do we prove completeness?
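As a rough illustration of the transitive-belief computation, here is a toy trust-propagation sketch. The damping factor and iteration count are arbitrary choices, not any standard WoT algorithm, and it assumes you already hold the complete, untampered matrix, which is exactly the hard part:

```python
# Toy transitive-trust propagation (illustrative only, not a real WoT algorithm).
# direct[i][j] = how much node i directly trusts node j, in [0, 1].

def propagate_trust(direct, damping=0.5, rounds=10):
    n = len(direct)
    total = [row[:] for row in direct]   # start with direct trust
    hop = [row[:] for row in direct]     # trust along paths of growing length
    weight = damping
    for _ in range(rounds - 1):
        # hop = hop @ direct: trust gained through one more intermediary
        nxt = [[sum(hop[i][k] * direct[k][j] for k in range(n))
                for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                total[i][j] += weight * nxt[i][j]
        hop, weight = nxt, weight * damping
    return total

# A trusts B fully, B trusts C fully; A gains damped indirect trust in C.
direct = [[0, 1, 0],
          [0, 0, 1],
          [0, 0, 0]]
t = propagate_trust(direct)
```

Note that `t[0][2]` only exists because node 1 sits on the path; anyone who can insert nodes into the matrix can manufacture such paths, which is the tampering problem described above.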

Then there is a temporal aspect to the WoT which is hard to tackle. I might have a belief now, but that does not entail I will believe it tomorrow. Without a continuous stream of events on a subject, the precision of my beliefs is always diminishing. However, the current WoT networks do not take this fully into account (or transitive beliefs, for that matter).


> Web of Trust implies such a glaringly obvious visual metaphor that I am truly in awe that not a single program works that way.

Yeah, but it doesn't actually work. The fact that I know you (i.e., I'm willing to certify your identity) implies nothing about my trust in identities you certify. For all I know, you mint a new identity every second, and then use all those newly-minted identities to cross-sign one another's keys.


Some time ago I had an idea for a web presentation of the WoT - I thought it would be fun to explore and might also present a kind of gamified incentive to build the WoT. I also thought that, given how old the thing is, there should be good libraries that would make writing it easy. Unfortunately this was not the case - there are no real libs, only workarounds that shell out to the executables to do the work and then parse the output, and that would be too resource-expensive to run on a web server.

I don't think WoT or PGP are fundamentally flawed - but we need a lot of experimentation to make it work and for that we need good libs.


"I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here."

I think it's the key model that's fundamentally flawed rather than pgp itself, which I believe the author of the article is also asserting.

In cryptography, it is often explained that despite the fact a one-time pad is guaranteed-secure (given various conditions I'm eliding), it is not practical in the vast majority of cases because of a chicken-and-egg problem: How do you distribute the one time pad in the first place? If you do it insecurely, it's a waste of time. If you can do it "securely", why not just use that secure channel to send the message in the first place? OTPs can still be useful because you can establish a secure channel once for a limited duration of time and then use it to temporally shift your security into the future, but that's a relatively rare use case. (That is, the vast bulk of encryption is being used between people who may never have had a "secure" channel between them; think HTTPS here.)

Similarly, PGP's got this significant problem where given that you have the correct keys and that you know you can trust them, it secures your communication quite effectively. But the question is, how do we get to the point where you know that you have the correct keys and you can trust them? Well... that's a hard problem itself. Especially considered over time.

So alternate models must be pursued.

Like the author, I think the Keybase approach is a good idea. In fact I'd even suggest that the idea should be generalized away from "social media accounts" to just "potentially unreliable mechanism" in general. If I have 6 mechanisms for asserting identity on my key, each of which are 95% reliable over the course of a year, then from an absolutist security point of view, that key is still insecure... but assuming even modest independence between the unreliable mechanisms (assuming naive total independence is definitely incorrect, once one is hacked the others are certainly more likely, but neither is it the case that one hack guarantees all others can be hacked), it's still much more secure than nothing at all.
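To put rough numbers on that intuition: under the (admittedly naive) full-independence assumption, the chance that an attacker subverts every one of the mechanisms in the same year shrinks geometrically with the number of proofs:

```python
# Yearly probability that ALL identity-proof mechanisms are compromised,
# assuming full independence between them (naive, as the parent notes).
def all_compromised(per_proof_failure, n_proofs):
    return per_proof_failure ** n_proofs

one = all_compromised(0.05, 1)   # 5% with a single 95%-reliable proof
six = all_compromised(0.05, 6)   # ~1.6e-08 with six of them
```

Real mechanisms correlate (a compromised email account helps attack the others), so the true figure is worse than this, but still far better than a single point of failure.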


> How do you distribute the one time pad in the first place?

Something I've wanted to make for a while now, that should be possible to make with almost any cheap embedded microcontroller, is a hardware dongle that stores OTP pads. This would be a generic character device that could be integrated into existing chat programs.

* Each device has a hardware RNG, e.g. [1] or similar

* A port that allows two devices to connect. When connected, they each start generating random numbers, sending a copy to the other device. They both store the XOR of each device's random number as the pad.

* A USB interface accepts plaintext, the device generates the cyphertext, while enforcing deletion of the used portion of the pad. Decryption is handled with a similar interface, so the pads never leave the device.

* The device would provide to the host how much pad is remaining, to be used in the UI. Warnings should be provided when the pad is running low, etc.
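The pairing-and-consumption flow in the bullets above can be sketched in software (a toy stand-in for the hypothetical dongle; real hardware would use its own RNG and physically enforce pad deletion):

```python
import secrets

class PadDevice:
    """Software stand-in for the hypothetical OTP dongle."""
    def __init__(self):
        self.pad = b""

    def pair(self, other, nbytes):
        # Simulates the port connection: each side contributes randomness,
        # and both store the XOR of the two streams as the shared pad.
        mine, theirs = secrets.token_bytes(nbytes), secrets.token_bytes(nbytes)
        shared = bytes(a ^ b for a, b in zip(mine, theirs))
        self.pad = other.pad = shared

    def encrypt(self, data):
        if len(data) > len(self.pad):
            raise ValueError("pad exhausted - meet in person to refill")
        # Enforce one-time use: the consumed portion of the pad is discarded.
        used, self.pad = self.pad[:len(data)], self.pad[len(data):]
        return bytes(d ^ k for d, k in zip(data, used))

    decrypt = encrypt  # XOR is symmetric; both sides consume pad in lockstep

a, b = PadDevice(), PadDevice()
a.pair(b, 64)               # "meet in person"
ct = a.encrypt(b"hello")
pt = b.decrypt(ct)          # b"hello"; 59 bytes of pad remain on each side
```

Because XOR is its own inverse and both sides consume the pad in lockstep, decryption is literally the same operation as encryption.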

The goal is to utilize existing knowledge and experience. Schneier (and others) recommend[2] that passwords be written down because people's understanding of physical security is better than their chances of memorizing enough entropy to actually make a usable password.

This isn't trying to solve the general WoT problem. Instead, it tries to solve a piece of it in a way that most people can understand. Connect devices when you meet in person, and you gain a certain amount of secure chat. Refill by meeting in person again.

It would be easy to extend this idea to provide other features (e.g. generating pubkeys), but since the goal is a simple device that is easy to understand, avoiding feature creep is important, at least initially. Features like WoT will be easier to implement if there is existing infrastructure that can be exploited.

[1] http://holdenc.altervista.org/avalanche/

[2] https://www.schneier.com/blog/archives/2005/06/write_down_yo...


You don't need to store a true one time pad. Keystreams are enough. So, while your device may act like it delivers a one time pad, it could instead draw a pseudo-random sequence from a chacha20 stream. That way, any synchronisation you do lasts for life.
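A sketch of the keystream idea, using SHAKE-256 from the Python standard library as a stand-in for the ChaCha20 stream mentioned above (a real device would use an actual stream cipher keyed with a nonce and never reuse stream positions; the offset bookkeeping here is the simplest possible version):

```python
import hashlib

def keystream(seed: bytes, offset: int, length: int) -> bytes:
    # Deterministic pseudo-random stream: both devices derive the same
    # bytes from the shared seed, so one synchronisation lasts "for life".
    # Recomputing the whole prefix is wasteful but keeps the sketch short.
    full = hashlib.shake_256(seed).digest(offset + length)
    return full[offset:]

def xor_crypt(seed: bytes, offset: int, data: bytes) -> bytes:
    ks = keystream(seed, offset, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

seed = b"shared secret from the one-time pairing"
ct = xor_crypt(seed, 0, b"hello")
pt = xor_crypt(seed, 0, ct)   # b"hello"
```

The catch, as with any stream cipher, is that both sides must track the offset and never encrypt two messages at the same stream position.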

But if we go to all this trouble, we might as well use public key cryptography, it's even easier to use. Internally, the dongle will be quite complicated, with stuff like Curve-stuff, Xchacha-something and poly-whatnot. What the users needs to know is simple:

Once initialised, your dongle can publish a public "fingerprint" that is unique to it. To decrypt messages encrypted with this "fingerprint", you need your dongle. To sign messages according to this "fingerprint", you need your dongle. If you lose it, your "fingerprint" becomes unusable, no recourse. If it gets stolen, the thief will be able to impersonate you, unless you did the sensible thing and locked your dongle with a secure passphrase (think Diceware).

Now we engineers can figure out how to make that dongle easy to use and secure against any compromised computer it may be plugged into. (We don't want the dongle to become untrustworthy just because it got out of your sight during lunch).


If you only use the OTP data for a protocol that exchanges symmetric keys, then you could effectively extend the capacity of such a device. I'd buy one - it's an enhanced business card.


I suggest implementing version 0.1 as either a frontend to GPG with the key on an existing smartcard (Yubikey or similar), or at least something that uses the existing OpenPGP standard for encryption. That way you don't have to worry about doing the cryptography from scratch and can focus on the UI side (which is hard enough on its own) and the fingerprint exchange (which is what your "extra port" needs to be). You don't have to make sure your device gets critical mass, because existing PGP users can interoperate with it (there are already PGP plugins for instant messaging programs that you could adapt, rather than having to write your own plugin ecosystem). And you may well end up producing something that's more useful to more people.


> frontend

That's exactly the kind of complexity I'm trying to avoid. I would get zero benefit from existing public key infrastructure, because this is a device that enables 1-to-1 communication only. It may be possible in the future to exploit such a device to authenticate GPG keys, but not in the initial version.

> cryptography from scratch

I'm not doing much crypto other than generating random bits (hardware RNG with whitening). One of the reasons for doing this in an embedded device is to keep everything important isolated on the device (data diodes are useful) where the "one use only" rule can be enforced. The computer-accessible attack surface would be extremely small; it's mainly just a USB (or whatever) character device you write plaintext to and read back the OTP'd cyphertext.

> fingerprint exchange

There is no exchange of fingerprints. The entire goal is to have a type of secure communication that is easy to understand, so it can't have complexity like handshakes to exchange stuff or key management. Even if it's hidden behind a UI, those features add complexity that affects how you use it.

> gets critical mass

Again, the goal is to not need critical mass. You only need that if you're trying to solve the general WoT problem. I'm only trying to provide communication between a pair of devices that will have to meet in person for synchronization.

> useful to more people

I'm assuming that people that know about GPG can already handle setting up their own secure communication. What I want to try is something that provides some features that everyone can understand. Trying to solve the entire problem at once has always made tools that were too complicated to understand if you have never heard of crypto. You might consider the device I've described as a kind of "training wheels" for the idea of using crypto.

> producing something

I'm not producing anything right now; this isn't a product or business plan. If I ever find the time and money to work on this project, it will just be a handful of hand-soldered RNGs on Arduino boards or similar.


> That's exactly the kind of complexity I'm trying to avoid.

It's only complexity for the implementation, not for the user.

> I would get zero benefit from existing public key infrastructure, because this is a device that enables 1-to-1 communication only.

You'd get to reduce the storage requirements massively and make the devices much more reusable, because you wouldn't have to do exchanges and store the results.

> There is no exchange or fingerprints. The entire goal is to have a type of secure communication that is easy to understand, so it can't have complexity like handshakes to exchange stuff or key management. Even if its hidden behind a UI, those features add complexity that affects how you use it.

The UX is "plug these two devices into each other via some special custom port" either way, no?

> The computer-accessible attack surface would be extremely small; it's mainly just a USB (or whatever) character device you write plaintext to and read back the OTP'd cyphertext.

If the UI is a unix character device then your target audience is a subset of the people who already understand GPG.

> Trying to solve the entire problem at once has always made tools that were too complicated to understand if you have never heard of crypto. You might consider the device I've described as a kind of "training wheels" for the idea of using crypto.

Right, but part of the point of training wheels is you attach them to a regular bike, you don't use a completely different device. A specialized frontend that only uses a very small simple subset of GPG would be very helpful.


> complexity for the implementation

Minimizing complexity is also important when writing a security feature. The entire firmware shouldn't be very large (bugs/kLOC is constant(-ish)), and dependencies increase attack surface.

> reduce the storage requirements

I don't see that as being a huge problem, because flash memory is cheap and I should be able to generate pads very quickly. Remember that the only problem I'm trying to solve is secure chat (text). 1MB of pad is a lot of typing.

I do like the idea mentioned in another comment about using the shared random secret as a stream of symmetric keys, which would nicely reduce the rate of pad usage without adding any more complex semantics.

> If the UI is a unix character device

I'm describing it to you as a character device, because I assume you know approximately what that implies (serialized data stream, etc). The UI for the user, for now, would probably be a plugin for libpurple or something, if I ever get time to write it.

> a very small simple subset of GPG

I've been trying variations of that idea for over 20 years. Many people need a far more rudimentary education about the idea of using crypto. I want to teach the idea of applying security at each end of the conversation. I want to teach the habit of putting an envelope on communication, even when it's just to a friend. I want to teach taking some of the responsibility for your own security instead of relying on 3rd parties ("the cloud").

I've tried to teach very small subsets of GPG already. That didn't work, so I'm simplifying the scope into something that will hopefully be easier to understand.

The "minimal GPG wrapped up in a very simple UI" device that you're talking about would make a great device to graduate into.


> dependencies increase attack surface.

In the general case yes, but GPG is probably (or at least should be) the most carefully audited codebase in the world.

> I do like the idea mentioned in another comment about using the shared random secret as a stream of symmetric keys, which would nicely reduce the rate of pad usage without adding any more complex semantics.

Right, at which point you already need a high-quality symmetric encryption implementation (and definitely need to worry about timing attacks and other side channels - quite possibly something you need already at the true-OTP stage). Such as the one in GPG.

> I've been trying variations of that idea for over 20 years. Many people need a far more rudimentary education about the idea of using crypto. I want to teach the idea of applying security at each end of the conversation. I want to teach the habit of putting an envelope on communication, even when it's just to a friend. I want to teach taking some of the responsibility for your own security instead of relying on 3rd parties ("the cloud").

All good things. I just struggle to believe that the amount of internal-only simplification you get out of making the device OTP-only is worth the cost of requiring storage, becoming text-only, having to have one device for each person you communicate with, having no way to send messages to people you haven't met, and avoiding compatibility with what is still the most widely deployed cryptosystem with any hope of being secure against government-level threats (and the only cryptosystem that we have the NSA on record as being unable to break). I can certainly believe that most of these things aren't worth exposing in the UI, but deliberately using a different standard for the implementation to ensure that you will never have the ability to add even one of those things should they actually prove desirable seems like a poor cost/benefit.


> How do you distribute the one time pad in the first place?

I'm fairly naive to this area, but wouldn't video chat initiated with public keys suffice? Then confirm identities and exchange secrets. To me this seems substantially equivalent to in-person key exchange for non-Three Letter Agency threat models. 20 years ago this wouldn't really have been feasible, but today it (mostly) is -- from a quick glance at the FAQ, Signal may even support something like this already.

(If your threat model includes "abduction and coercion", then aren't you kind of hosed even with previous in-person OTPs?)


A video chat is not enough to safeguard secrets to be used in the future.

For one, if the video chat is secure enough for an otp exchange, the otp isn't needed.

Secondly, if your video chat gets recorded, which may very well happen, you need to use ephemeral keys.

Thirdly, since the video chat is likely recorded, at least the meta information, the effective security of your OTP degrades over time, as new breaks or speedups are created for the video chat cipher.


Interesting, thanks. Am I understanding correctly that point 2 & most of point 3 are risks because of the possibility of either future device compromise, or e.g. quantum decryption technology? These are very general risks, so why do they apply here any more than elsewhere?

I realized I probably should not have replied to the part about OTPs specifically. What I'm curious about is remote trust verification via secure video.


Partially, it's not just future device compromise but also Internet recording. It is best to assume that any communication over the Internet is recorded. From that standpoint, once the keys (not the device) are cracked the internal secret is also disclosed. This was why I recommended ephemeral keys.

By "cracking the keys" a cryptographic break is not always required. It can also happen via disclosure, a weak implementation, problems with the protocol, etc. One can scan a list of recent vulnerabilities for this: session reuse, master secret reuse, session resumption, heartbleed, etc.

I would call these out in particular here, because secrets are being exchanged. If those inner secrets are used to protect (directly or indirectly) multiple messages, the key disclosure becomes more pronounced.

You are quite correct regarding quantum computing. A sufficiently large quantum computer is guaranteed to break elliptic curve, DH, or RSA, for example. The determining factor is the number of qubits.

What do you mean by remote trust verification via secure video? That sounds quite interesting. Do you mean facial recognition inside a channel assumed to be secure, as a secondary validation of an otherwise "pre-trusted" party?


If a video chat with public keys is secure enough to exchange a one time pad, why would you need to bother with the one time pad at all?

By transmitting your OTP it is no stronger than the method used to protect it in transport, so if that transport method is secure enough to guarantee the security of the OTP, why not simply use that method for everything and forget about the OTP?


I did say "exchange secrets" for a reason -- that secret may be a key for later use (e.g. for data dumps), or actual information.

What I'm trying to understand is whether the (relatively new) feasibility of interactive video channels allows for building roughly the same level of trust as would be provided by in-person key exchange. I'm basing this on the understanding, possibly incorrect, that encryption with a public key allows for creating a secure communication channel, but not necessarily a trusted one. The hypothesis is that the capability to conduct interactive video provides a way to verify identity and establish trust at roughly the same level as would be provided by in-person exchange (again, assuming 1-1 trust, and excluding TLA threat models).


I might trust a video chat today to verify an identity, but only because it would be a relatively new method. Definitely not in a few years if it ever took off. It's already possible to forge a talking head and fall back on "sorry, bad connection out here in the field" to hide glitches.


You are forgetting a key part of the 'trust' thing; you have no way of knowing if someone is man-in-the-middle attacking your video chat.

Example: Alice wants to video chat with Bob to exchange the secret key and verify identity. Mallory sets up a MITM attack, and gives her own public key to both Alice and Bob. Alice and Bob think they are securely talking with each other, but they are actually securely talking with Mal, who decrypts the video, watches it, then forwards it on to the other person.

This is why you can't have a secure communication channel without trust; you don't know if your secure communication is being intercepted, read, and then passed on.
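Concretely, the Alice/Bob/Mallory scenario above is the classic attack on unauthenticated Diffie-Hellman. A toy sketch, with group parameters chosen only for illustration and far too small to be secure:

```python
import secrets

P, G = 2147483647, 7   # toy DH group (2^31 - 1 is prime); hopelessly small for real use

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
m_priv, m_pub = keypair()   # Mallory, in the middle

# Mallory substitutes her public key in both directions, so each session
# key Alice and Bob derive is actually shared with Mallory, not each other.
alice_session = pow(m_pub, a_priv, P)      # what Alice thinks is her key with Bob
bob_session   = pow(m_pub, b_priv, P)      # what Bob thinks is his key with Alice
mallory_to_alice = pow(a_pub, m_priv, P)   # equals alice_session
mallory_to_bob   = pow(b_pub, m_priv, P)   # equals bob_session
```

Mallory decrypts with one key, reads, re-encrypts with the other, and relays. Without some out-of-band trust anchor (a fingerprint check, a WoT signature, a certificate), neither endpoint can detect this.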


> How do you distribute the one time pad in the first place? If you do it insecurely, it's a waste of time. If you can do it "securely", why not just use that secure channel to send the message in the first place?

Because you may not have any messages to send at the time of the secure exchange of OTPs. Do note that one time pads are (or at least were) commonly used in the military.

> But the question is, how do we get to the point where you know that you have the correct keys and you can trust them?

That is not a technological problem per se, but rather a social one. Imagine that when you exchange phone numbers (or Farcebook IDs, if you're into that) with your work colleagues, or friends, or fellow attendees at that developer meetup, you also exchanged public keys.

Mechanically, the interaction is at about the same level of complexity, and effectively, as has already been mentioned, the web of trust already exists (Farcebook, ChainedIn, and all the other bollocks).

If any of those decided to implement secure end-to-end comms using PGP and offered you the possibility of uploading your public key for dissemination to your "friends", PGP might become ubiquitous in a matter of weeks. At a smaller scale, German email provider GMX is doing exactly this, by the way.


> Like the author, I think the Keybase approach is a good idea. In fact I'd even suggest that the idea should be generalized away from "social media accounts" to just "potentially unreliable mechanism" in general.

It already has this to a small extent. You can sign other stuff like domain DNS entries or HTTP servers (by hosting a file).


"How do you distribute the one time pad "

Well you hand it over to the person you want to communicate with when you see them?

Obviously that doesn't work in many use cases, but in many other cases it does: many of the most important secrets are typically shared with people you already know and have met before, no?


When did you meet Paul Graham and hand over to him the crypto material you are using on the HTTPS connection you are reading this on?

The vast majority of encryption in the real world is between people who did not meet and exchange crypto info.

(Note this is specifically about one-time pads. While I agree the Web of Trust has failed, it is one effort to circumvent the problem.)


If he cared then he'd probably be the one handing over crypto material and instructions (about whatever crypto), as you probably need some clout to make it happen.

If even he says "write me an email in plaintext" then I'm not too hopeful for crypto in general.


That's exactly what the OP was talking about when s/he said "temporally shift your security".


Like when I buy a new laptop from new egg, or contact Laura Poitras with a hot scoop...


No.

But when you discuss the real secrets with a journalist or business partner in another country. Email communication with family members, business partners.

Potentially web traffic with your company's or bank's website, why not.


Innovative new ways of distributing PGP key fingerprints are to be welcomed. But why would you not keep using PGP for the part that it's good at?


The author did include the standard UX-of-PGP-sucks arguments, but he was also making the point that some of the core models around PGP suck.

eg he was saying you can't share a key across multiple devices. Or if you do, you just increase your attack surface and your weakest link becomes the hotel wifi you plug into.

eg if your key does get compromised, now you have to rotate all your contacts, which if you distributed your key on a business card, is pretty friction-prone and encourages you to discount that weird activity that could have been a blip you saw on the hotel wifi.

The big one is if your key ever does get compromised, now all your past history becomes accessible. So he's saying there's some things that PGP is fundamentally bad at, and you need a new model, not just a band-aid UX fix.

> Finally, these days I think I care much more about forward secrecy, deniability and ephemerality than I do about iron clad trust. Are you sure you can protect that long-term key forever? Because when an attacker decides to target you and succeeds, it won't have access from that point forwards, but to all your past communications, too. And that's ever more relevant.


> eg he was saying you can't share a key across multiple devices. Or if you do, you just increase your attack vector and your weakest link becomes the hotel wifi you plug into.

So what are the options here? You can have a GPG key protected by any mechanism you care to think of (passphrase, smartcard, ...). You can share it between devices or not as you see fit, subject to the same tradeoff that is always going to be involved in that decision. I can't see any way to do it better?

> eg if your key does get compromised, now you have to rotate all your contacts, which if you distributed your key on a business card, is pretty friction-prone and encourages you to discount that weird activity that could have been a blip you saw on the hotel wifi.

PGP actually has very good support for key rotation by using subkeys - you keep your master identity key offline/secure and that's what other people sign, but you use it only to sign subkeys with short expiry times. People don't use it, but that's a UX issue.

> The big one is if your key ever does get compromised, now all your past history becomes accessible. So he's saying there's some things that PGP is fundamentally bad at, and you need a new model, not just a band-aid UX fix.

True, but I think long-term signing is often what you want. There are different models that make sense for different communication scenarios certainly.


Actually you also have to use it to sign /other people's/ keys.


True. But the point is you don't need to take it travelling with you.


I'm going with fundamentally flawed. Or perhaps more exactly, a solution for a non-problem.

Things PGP can do:

- Hide the contents of a message. But not the fact of a message nor who it's to. And it's only as hidden as a key that your recipient has to keep secret indefinitely.

- Permanently be incriminating, since the message can be as easily opened a decade from now.

- Prove you're you. Which is great for incriminating you. Also the proof is only good if your secret key is still secret, which probably isn't the case if you've been arrested. At that point, it's good for convincing people it's you when it's really the FBI.

- Authenticate keys through a trust mechanism so sparsely populated that unless you're actually in a spy cell, the chances of having a valid trust path from A to B is astronomically small.

- Distribute keys through what is really only slightly more sophisticated than a world-writable Dropbox.


Add one more thing: stop the NSA per the Snowden leaks. Everything else in the leaks failed that test. Using a solution strong against the strongest attacker is worthwhile to people wondering how good various solutions really are.

Far as a decade from now, that's probably all you need given the statute of limitations.


The statute of limitations isn't a blank slate that means if you can get away with something for n years you're off scot-free.

I'm not googling this type of query at work, but typically as more information about a crime becomes available to law enforcement, the statute of limitations is reset. So if you're buying something illicit and securing communications with PGP, and the SOL is 5 years, if LE doesn't get the contents of that communication for 4 years, they still have 5 years to decide what to do with it.

All SOL means is that LE can't sit on incriminating information about you indefinitely and pursue charges decades in the future for minor crimes.


1) Key rotation can solve the second part of this.

2) Key rotation solves this, but you lose the ability to read old messages yourself. If you don't have the keys anymore you can't view the message.

3) This isn't unique to PGP? Or do you have an alternative? Because plaintext is infinitely less secure in this regard.

4) Depends how you determine trust of a user. In an ideal world you'd be correct. But I trust that the person I've known for nearly 6 years is who they say they are when I signed their key, though we've never met IRL. It's possible it isn't them, but that chance is also astronomically slim.

Key rotation makes the WoT even more complicated and less trustworthy. That's a big problem.


Missing the point a little bit on 4.

Proving you're you is great if you're, say, Canonical distributing package updates to Ubuntu, where the adversary is malware distributors.

But where your adversary is eg: the FBI, then it promotes a false sense of assurance, because it's actually really easy to spoof someone if you can arrest them and force them to give the key password.


>because it's actually really easy to spoof someone if you can arrest them and force them to give the key password.

Country dependent [0]. Not enough evidence one way or the other on FBI coercion.

[0] https://en.wikipedia.org/wiki/Key_disclosure_law


s/FBI/anyone with a gun/g


> - Prove you're you. Which is great for incriminating you. Also the proof is only good if your secret key is still secret, which probably isn't the case if you've been arrested. At that point, it's good for convincing people it's you when it's really the FBI.

So, best practice: publish your private signing key publicly if you ever get arrested?


It is interesting, this realization that Power Users have User Experience problems in Key Management very similar to those Novice Users have, just to a different degree. It does maybe speak to a deep conceptual usability issue in the WoT model. Maybe the tough learning curve has always been a symptom of the thousand papercuts of the Knowledgeable/Power User case, and it is time to question the model and look for alternatives.

The Web of Trust is built on long term trust of objects that should be short term and plentiful and that does seem an inherent contradiction in terms. WoT "best practices" have always been that keys should never live that long (at most two to five years being an old received wisdom back when I was most actively exploring the WoT), but proper key signing involves lots of little contacts (or key signing "parties") that are slow to accumulate and should last a great deal of time, but are applied to a specific key.

Power Users can get some continuity between keys when rotating them by signing new keys with old ones before they expire, if they can manage that key that long and are prescient enough to build and sign a new key. (I know I lost continuity with my most trusted WoT key by not managing it well enough and I'm certainly not alone there; there is a great deal of churn in the WoT and lot of it is expired.)

So Power Users try for longer term keys with further risks and even larger key management issues and with those longer term keys they try to manage a coterie of smaller term keys exponentially increasing the number of key management issues.

Keybase seems to be the best bet at a trust model that distinguishes active keys from long term trust (social trust), and might be a good answer if they solve "average user" user experience.

Signal and WhatsApp and some of the other OTR-ish mobile apps with E2E encryption seem to have solved some of the "average user" user experience problems, but don't seem to have good long term trust models.

Somewhere in the soup maybe someone will solve more of the chicken-and-egg hurdles and evolve something that works for everyone.


>we have a deep conceptual usability issue here.

yes, yes we do. All this stuff seems easy if you have the curiosity to spend hours and hours reading dry documentation about how it works. This is to say nothing of actually getting your hands on the tech and inevitably having problems that require more hours of forum searches, IRC, and other time drains.

It's not that "regular" people are too stupid to do this, they just don't see the value proposition in it. Even when the privacy issue starts to negatively affect regular people (think Black Mirror "Nose Dive") many still won't be interested in the technology to do what I wrote about above.

I spent a bit of time in the crypto community and immediately realized there is a human resources problem: the communities, for the most part, have no one who understands or cares about how to dumb the UX down enough to make the value prop work for regular people.


And it's not just regular people, it's developers, programmers. Most devs I know look at pgp and are totally capable of figuring it out, but what they say basically boils down to: ain't nobody got time for that


> I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here.

Last time I tried to use PGP on Windows, the gpg4win setup application crashed repeatedly during installation, and I had to use a walkthrough with screenshots because I couldn't figure out how to sign messages in Thunderbird.

Forget deep conceptual usability issues; there are tons of major surface level usability issues.


Is there anything that enables key exchange via smartphones? Ideally it should be as easy as a meatbag handshake.

Basically, if you can swap contacts via NFC then the pgp keys should go along with it.

It may have some theoretical weaknesses such as the exchange being MITMable if the users don't verify something on their screens, but I think having many more edges in the graph would make up for it since you might already have an expectation for that key through friends-of-friends paths.
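To make the friends-of-friends idea concrete: treat key signatures as edges of a graph and look for a short trust path between you and the key you want to verify. A toy sketch in Python (the names, edges, and depth limit are all illustrative assumptions, not real WoT data):

```python
from collections import deque

# Toy web of trust: an edge a -> b means "a has signed b's key".
signatures = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["carol", "dave"],
    "carol": ["eve"],
    "dave":  [],
    "eve":   [],
}

def trust_path(graph, start, target, max_depth=3):
    """Return the shortest signature chain from start to target,
    or None if target is not reachable within max_depth hops."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        if len(path) > max_depth:
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(trust_path(signatures, "me", "eve"))      # ['me', 'alice', 'carol', 'eve']
print(trust_path(signatures, "me", "mallory"))  # None
```

The more edges the NFC exchanges add, the more such paths exist, and the less any single MITM'd exchange matters.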


OpenKeychain does this, provided both users have it of course.


To expand on this, OpenKeychain wants you to use the camera on your phone to scan a QR code on your friend's phone.


https://www.cylab.cmu.edu/safeslinger/ does simple handshake exchange and works w/iOS/Android but unsure if still maintained


So not really for iOS :/


> I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here.

IMO, the idea, the model, and implementation are flawed.

The idea that people care about a web of trust in general is bad. The model itself relies on the assumption that it's a popular piece of software that is used in the way it's intended. PGP itself is popular, but only for the fact that viable alternatives are thin. The software implementation is confusing to technical end users, and third party front ends are just as bad.


> The idea that people care about a web of trust in general is bad.

Well, the very question is kind of a type error: people don't know what a computer is in the first place. 1 in 4 can't use a computer in any capacity, and 90% of the rest have zero knowledge of the underlying principles. They don't even know if they would care about a web of trust.

In the mean time, the powers that be are building us a network of universal spying. Oh well.


> I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here.

Why not both? PGP is definitely flawed with its lack of perfect forward secrecy.


Perfect forward secrecy requires interaction between the two parties. So, either you need to require both parties be online simultaneously for their first interaction, or you give up on E2E encryption, or you allow the first message to not have PFS. (After you've established two-way communication, you can use Signal's double-ratchet mechanism to maintain perfect forward secrecy with offline operation.) Now, maybe that first message is the null message or just a simple low-secrecy "Hello" message, but you still need that extra initial round-trip message to establish PFS.

Of course, you need to delete emails once sent and once read in order for PFS to be of much value. However, without PFS, there's really no such thing as a deleted email, just emails the FSB/NSA haven't rubber-hosed you for the keys yet.
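To illustrate the ratchet half of this: once two-way communication exists, forward secrecy only needs a one-way chain of key derivations where old state is deleted. A minimal symmetric hash ratchet sketch (the derivation labels are illustrative; Signal's actual double ratchet additionally mixes in fresh Diffie-Hellman outputs):

```python
import hashlib

def ratchet_step(chain_key: bytes):
    """Derive a one-time message key, then advance the chain key.
    Deleting the old chain key gives forward secrecy: past message
    keys cannot be recomputed from the current state."""
    message_key = hashlib.sha256(chain_key + b"msg").digest()
    next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain_key

# Placeholder for the output of an initial key agreement.
chain = hashlib.sha256(b"shared secret from initial key agreement").digest()
keys = []
for _ in range(3):
    mk, chain = ratchet_step(chain)  # the old chain value is dropped here
    keys.append(mk)

# Each message key is distinct; compromising the current chain key
# reveals nothing about the message keys already used and deleted.
assert len(set(keys)) == 3
```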


The conclusions here (avoiding long-lived per-identity keys and having the option to easily rotate and re-validate per-device keys) are very much what we've aimed for in the end-to-end crypto for Matrix.org (https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-en...).

Rather than using a silo like Signal or WhatsApp, it is possible to get the flexibility of an open federated network built on an open standard, whilst still having the lighter weight approach of trust common to E2E messaging apps like WhatsApp. Or at least that's the hope :)


When I mention Matrix, a lot of people seem to pigeonhole it as a chat system alone because Riot is such a dominating part of the application ecosystem.

It would be really great to have more code and demonstrations available; adding Matrix was suggested for Mastodon[0] to potentially gain chat and private messaging features that aren't part of GNUSocial, but as of right now it's considered out of scope.

[0] https://github.com/Gargron/mastodon/


Hum, just found the bug at https://github.com/Gargron/mastodon/issues/311 - shame that folks there haven't grokked what Matrix is. Yes, pigeonholing it as a chat system is kinda missing the point, but it's an easy way to kick the tyres and prove its potential.

Once threading lands in Matrix we'll be adding in gateways for SMTP, IMAP, NNTP, Discourse, and possibly Gnusocial etc - either written by us or from the community. Then hopefully the bigger picture will be more obvious(!)


I would love to see a full-federated email-like (as opposed to chat-like) system built on top of Matrix.


Matrix really is the hope in this respect. I would absolutely love Matrix in combination with Keybase. This would essentially connect the summation of my online identity with my chat system.

It really does seem like a match made in heaven, but I understand how practically difficult this is.


the good news is that keybase now deals in the same EC25519 keys that we do :) the bad news is that they've never responded to any of our requests to hook up. Plus there's a bit of a philosophical mismatch given keybase are effectively centralised, even if they publish the root of their ID tree to a blockchain.

We need to solve decentralised identity somehow for Matrix anyway, so hopefully we'll find a solution soon :)


Interesting.

I don't propose that Keybase is adopted wholesale, but somehow we need to be able to connect authentication systems (centralised or not) with the protocols we use for chat. Maybe these can be made pluggable. Seems like a hard problem; I'm thankful that nobody expects me to come up with a solution.

What's the best information on the current Matrix identity stuff? I didn't find really good information on how this currently works.

Thanks for your work.


At the moment Matrix identity is a very simple centralised system: http://matrix.org/docs/spec/identity_service/unstable.html. The official bug for replacing this with something decentralised is https://github.com/matrix-org/matrix-doc/issues/712.


I'm currently testing out Riot with the wife, as a potential Telegram replacement.

So far matrix/riot are my favorite in terms of vision, but I have to say that the UX is still rather unfamiliar for my limited set of non-technical users.

Still, thank you for matrix, awesome job so far.


yup, turns out good UX is a nightmare; who knew? :D it's being worked on atm :)


IMHO, the biggest issue with matrix, is that there's no free hosted solution for using my own domain/email.

Sure, I can rent a server and set up my own, but that's a huge commitment to "try out" a protocol which none of my acquaintances uses yet.


hm, can you point me at an email provider who provides free SMTP/IMAP/webmail or XMPP hosting for custom domains? I may be missing something, but I can't think of one...

The model is that you can try it out on the matrix.org server via riot.im or something, and then run your own if you like what you see :)


Zoho actually provides custom domain email for free.


What does SMTP/IMAP have to do with this? We're talking about early adopters of an unused technology. In 2016, getting into email is far from being an early adopter.


Good points, but I would also like to point out that https://www.usenix.org/system/files/1401_08-12_mickens.pdf linked from the blog post was an entertaining read, so for anyone that didn't read said PDF: do.


> “But James,” you protest, “there are many best practices for choosing passwords!” Yes, I am aware of the “use a vivid image” technique, and if I lived in a sensory deprivation tank and I had never used the Internet, I could easily remember a password phrase like “Gigantic Martian Insect Party.” Unfortunately, I have used the Internet, and this means that I have seen, heard, and occasionally paid money for every thing that could ever be imagined. I have seen a video called “Gigantic Martian Insect Party,” and I have seen another video called “Gigantic Martian Insect Party 2: Don’t Tell Mom,” and I hated both videos, but this did not stop me from directing the sequel “Gigantic Martian Insect Party Into Darkness.”

This is hilarious, thanks for pointing this out!


I like:

"It’s like, websites are amazing BUT DON’T CLICK ON THAT LINK, and your phone can run all of these amazing apps BUT MANY OF YOUR APPS ARE EVIL, and if you order a Russian bride on Craigslist YOU MAY GET A CONFUSED FILIPINO MAN WHO DOES NOT LIKE BEING SHIPPED IN A BOX. It’s not clear what else there is to do with computers besides click on things, run applications, and fill spiritual voids using destitute mail-ordered foreigners. If the security people are correct, then the only provably safe activity is to stare at a horseshoe whose integrity has been verified by a quorum of Rivest, Shamir, and Adleman."

For his claim "YOU’RE STILL GONNA BE MOSSAD’ED UPON" I still don't know how to interpret the fact that Snowden seems to be relatively fine. Maybe that he had the idea about the blind spots of the system in which he worked.

His opinion on PGP "web of trust":

"“Chains of Attestation” is a great name for a heavy metal band, but it is less practical in the real, non-Ozzy-Ozbourne-based world, since I don’t just need a chain of attestation between me and some unknown, filthy stranger—I also need a chain of attestation for each link in that chain. This recursive attestation eventually leads to fractals and H.P. Lovecraft-style madness."

It is an opsec problem that all the connections are then cryptographically provable.


> For his claim "YOU’RE STILL GONNA BE MOSSAD’ED UPON" I still don't know how to interpret the fact that Snowden seems to be relatively fine. Maybe that he had the idea about the blind spots of the system in which he worked.

What reason would any agency have to un-live Snowden? Any damage he has done was already done in HK and before; he has nothing more to reveal. It would only turn public opinion against the agencies.


True that it would hurt public opinion even further of the agencies if they were to take him out - but I thought he only revealed a portion of what he grabbed.


My understanding is that he handed everything off to the journalists.


Yes, for precisely the reason that he did not want to be the arbiter of what is released. That's probably why he's still alive. It was a good decision


There's quite a few Mickens rants, and they're all well written.

Edit: Link http://mickens.seas.harvard.edu/wisdom-james-mickens


Guess I have some homework- thanks for linking these!


Very entertaining. Thanks for highlighting this and introducing me to this guy's writing.


Probably one of the best reads I've had in a while, yes.


> Yeah, about that. I never ever ever successfully used the WoT to validate a public key.

If you ever installed a Debian package then you did. A long-term identity as "Bob Jones" might not be terribly useful - but that's not the kind of long-term identity we care about a lot in real life either. A long-term identity as "Debian release manager" or "Signatory on bank account xyz" or even "Wikileaks committee member" is a lot more important, and for those cases PGP becomes very useful.

> Then, there's the UX problem. Easy crippling mistakes. Messy keyserver listings from years ago. "I can't read this email on my phone". "Or on the laptop, I left the keys I never use on the other machine".

These are real problems. We should fix them. But we don't need a new crypto standard to do so! It never fails to amaze me how many people/organizations are like "I don't have the time/money/patience to write a high-quality OpenPGP libary (or a high-quality GPG frontend), but I'm perfectly placed to create a new cryptosystem from scratch."

> Your average adversary probably can't MitM Twitter DMs (which means you can use them to exchange fingerprints opportunistically, while still protecting your privacy). The Mossad will do Mossad things to your machine, whatever key you use.

This is pets vs cattle in the opposite direction. Can Mossad Mossad you personally? Yes, if you're a big enough target, but they can't Mossad everyone. Whereas the NSA can MitM key fingerprints exchanged via Twitter on an industrial scale.

> Mostly I'll use Signal or WhatsApp, which offer vastly better endpoint security on iOS, ephemerality, and smoother key rotation.

If you're using iOS you've already given up against state-level attackers. Anything actually encrypted (e.g. IRC with SSL) is more than adequate in that case. Most people don't need the jump up to PGP, sure. But it's important that the option is there for people that really do need it. It bears repeating that we know, from their complaints in leaked emails, that the NSA can't break PGP when used correctly. That's an extremely strong seal of approval for the most critical use cases for encryption.


You understand the difference between a lone hacker as an APT vs a state level threat as an APT.

That distinction is huge, and choosing not to defend yourself against one or the other may allow for huge convenience gains at the cost of what is, to many, a purely hypothetical notion of security.

Can we improve the tools and techniques we have enough so they are convenient enough to not have to make such a choice?


I think the big line in terms of what's practical is whether you're willing to trust the CA system or not. If you are - and I think if your threat model is a lone hacker then you can, compromising a single CA or maintaining an MitM requires a very high level of capability - then while doing SSL right and in a way that will let you detect MitM attempts is by no means trivial, there's such a wealth of messaging options available that I'm just not worried about this case. Use whatever, you'll probably be fine.

Once you step beyond that, there are no convenient options (or to put it differently, all convenient options come with risks that are more-or-less as big as the CA system). E.g. compromising Signal's central servers is probably not substantially harder than compromising a CA, and I simply don't trust that a system that does automated key exchange on first use (trusting the servers) will be able to avoid downgrade attacks by a compromised server. I think to a certain extent usability issues are inherent - if you are unwilling to trust any centralized identity services then you have to show key fingerprints and rely on the user to verify them themselves, there's no third option. At the same time I think we can and should do a lot better than current GPG.


What do you mean by "APT"?


"Advanced persistent threat"; see also https://en.wikipedia.org/wiki/Advanced_persistent_threat


> If you're using iOS you've already given up against state-level attackers.

Wasn't the recent apple vs FBI debacle evidence to the contrary?



It ended because FBI just cracked the device anyway. How is it contrary?


Insofar as the FBI had to actually crack the device and doesn't have a universal key of some sort. Newer versions will be (and already are) more secure. It's not perfect, because this was 'just' the FBI and 'just' the legal way, but at least it's something.


They might debate in public and make a deal in private, you may never know.


Apple was a part of the NSA's prism program.


PGP may have broken down for the author, but it's still used in a lot of places. For example, to communicate with our bankers at work, every email has to be properly encrypted and signed - or it goes into a blackhole. The only way to exchange public keys(initially) is in person. Once that is done, new keys are provided from that person, and the WoT expands.

tldr; it doesn't work for the author, but it does work for lots of individuals and even more companies with secrets to protect.


Oh, how I would love it if my (personal) bank/utility/isp would send me PGP-encrypted/signed emails, instead of emails saying "There is some updated information for you on our web site, please log in to see it".

But given the general competence demonstrated by such organisations, it's something I will never see.


PGP is also used very heavily on darknet on drug marketplaces.

OTR (or even Signal) is not possible there, so people stick to tried-and-true PGP.


I think darkweb marketplaces are a slightly different use case though. The requirements for a darkweb transaction are the ability to tell the vendor your address so they can send you illegal goods, while hiding it from the marketplace itself in case their servers are seized. A random PGP key with no real name and no verification is entirely adequate for this purpose - indeed, any kind of identity validation would probably be seen as a negative for such a situation.


He doesn't say that it's useless but that it's too much hassle to use even for someone that works in security.

It works in companies because you don't get a choice if you did most people wouldn't use it.


In other words, it only works in controlled environments, not out in the wild.


Not to be glib, but this is true in much the same way as secure http. Really the only way to do it properly is to control the root key for your organization. The chain of trust starting with the vendor you got the computer from is bonkers.


It's pretty bonkers that you trust a computer vendor to control the firmware on your PC but not the CA chain. If Dell is determined to listen to your conversations, they can spy from the hardware, keylog beneath the OS, or literally listen through an embedded microphone.


People don't trust their hardware vendors because they're trustworthy, they trust them because they don't have any real choice.

If my preferred OEM offered me the choice between a locked-down opaque system, and an /equivalent/ system that is completely open and verifiable, I'd choose the second option every single time. I expect many would as well.


Sure. I'm just saying it's easier to verify the CA list than verify the hardware, and the hardware gives the OEM a superset of what they can do with the CA list.


Summary: The author decided that being connected to long term keys does more harm than good, partly due to the pressure to stay with potentially compromised keys due to the difficulty of starting over. The author will instead focus on secure IM using short term keys bootstrapped by social media accounts.

1) As others have pointed out, I really think the author is overestimating the effort required to compromise a twitter or other social media account. There are many accounts of this happening, including to people like Brian Krebs who knows he is a target and does everything possible to avoid the attacks.

2) Encrypted IM as the primary higher security communication channel seems to be a popular option these days, mostly leaving those of us who don't like IM to look at alternatives.

3) Briar (briarproject.org) is a promising alternative for messaging, although not ready yet and currently only targeting android, which has its own major security issues. Due to the focus on enabling offline, forward secure messaging, it can be used to defeat mass (network) survailance. It can also be used online and addresses some of the specific concerns raised.

4) General purpose computers are both handy and necessarily have security issues. Special purpose devices for more limited secure communication would help with many issues.

5) Secure communication isn't much of a goal; it is more helpful to consider specific threats. If you do something non-trivial towards a vague goal, it is easy to find a way it doesn't meet that goal when you feel like not doing it any more. I'm not sure what the author was trying to achieve with PGP in the first place.


Kind of off topic: I really like iMessage. Not to be an Apple fanboi, but it really is one of those things where It Just Works(tm). Encryption shouldn't have to be something the end user has to worry about; it should be transparent to the user while still being as secure as possible (HTTPS and TLS are a great example of this). For the user who cares about encryption, they don't have to configure anything. For the user that doesn't care, they still benefit.

By baking it into the OS, Apple ensures that anyone with an iDevice benefits from it. Compare that to having to download an app that may change depending on possible compromises. Anyone who's tried to convince a family member to use Signal, Telegram, etc. knows how much of a pain it is.


> By baking it into the OS, Apple ensures that anyone with an iDevice benefits from it.

But only people with iDevices benefit from it. I prefer Signal to iMessage because iMessage is iOS only, and I'm disappointed with Google for not including an iMessage equivalent with secure messaging by default.

> Compare that to having to download an app that may change depending on possible compromises.

If you mean what I think you mean, using iMessage will not save you/them from this any more than using Signal would, the only benefit to iMessage is that it's already installed on iDevices when you buy them and has secure-messaging enabled by default.

Which is still a step above Android currently - which has no default-installed secure messaging app at all.

I'm lookin at you Google!


To me, Keybase (https://keybase.io) seems to solve the "PGP has a bad user experience" problem correctly for like 90% of the population. You post proofs of your public key to known media (Twitter, Github, your website, etc.) which you control. These can be checked by anyone.

Even if the remote person doesn't know they are talking to you (as a human entity), they know they are talking to the combined online persona of all those accounts, which is all that matters for the vast majority of them. Yes, it is possible for all these services to collude and post false proofs, but that would be relatively easily detectable, and realistically not a concern for the majority of people out there, whose alternative is to not use any encryption. People who are really concerned can always fall back to standard PGP.

[Edit: Looks like I didn't read the article carefully enough, the author himself says he actually does use Keybase too.]
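The collusion argument can be made concrete as a consistency check across independently hosted proofs. A deliberately simplified model (real Keybase proofs are signed statements posted to each service; the fingerprint strings below are placeholders standing in for what a client would fetch and verify):

```python
def check_proofs(expected_fingerprint: str, proofs: dict) -> dict:
    """Return per-service verdicts; the persona only checks out
    if every independently hosted proof matches the expected key."""
    return {service: fp == expected_fingerprint
            for service, fp in proofs.items()}

# Hypothetical fingerprints a client fetched from each service's proof.
proofs = {
    "twitter": "A1B2 C3D4 E5F6",
    "github":  "A1B2 C3D4 E5F6",
    "website": "A1B2 C3D4 E5F6",
}
verdicts = check_proofs("A1B2 C3D4 E5F6", proofs)
assert all(verdicts.values())  # all services agree -> trust the persona

proofs["twitter"] = "DEAD BEEF 0000"  # one compromised account
assert not all(check_proofs("A1B2 C3D4 E5F6", proofs).values())
```

The point of the model: a single compromised account produces a visible mismatch rather than a silent takeover, which is exactly why all the services would have to collude (or all fail at once) to fake the persona.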


The combined online persona of those accounts is only as strong as their combined security. aka: Why would services need to collude when they can get the job done by ineptitude?

https://medium.com/@N/how-i-lost-my-50-000-twitter-username-...


I agree, a lot of services displayed a shocking amount of incompetence in that post. However,

a) The more proofs you have, the harder it becomes to forge them all. YC, for example, is one location, and is run (in my opinion) by very smart people, where it would be hard to get a compromise.

b) My point is that this is an excellent alternative to not using anything in a way that is both friendly to people ("just make this post on [website]") and compatible with an older, better method of privacy (PGP) that people have been using for years.

It may not be as perfect as some of the more esoteric alternatives that people have suggested elsewhere in the thread (I'm not sure about this, can an incompetent phone company employee compromise some of the phone-based ones? I've come across a lot of incompetent phone company people), but much easier for the regular person to use.


This is a crazy story, but I still think Keybase gives you a lot to defend with here. You have to compromise all the accounts and change all the proofs to actually be able to send valid messages to somebody else.

That is a tall order, even if you use the same email as a username everywhere. I use long random passwords and 2FA on a number of the important accounts. I don't trust Google and Facebook, but I trust them to have some interest in not letting accounts be compromised.

Also, if somebody changes all the proofs, they will all be new, and a smart system should be able to detect this sort of thing in the future.


For me keybase solves a lot of the WoT problem but not the day-to-day usability problem.

The killer is lack of good x-platform e-mail integration + difficulties in key management.

I was hoping keybase might take that on as well, but AFAIK that's not on their roadmap.


I tried keybase. I find it a novelty. It's just as awful to use as pgp. So regardless of anything else it offers it's a dead end like pgp. It's designed by developers and security nerds I get it. But that's who it will stay with too.

It's not an attack on them just the reality.

People don't mind SSL because they don't have to do anything to get its benefits. It's transparent to the end user.

Is it perfect? Hell no. Managing certs is as bad as managing keys. It's a PITA. But it only has to be done on one end.

Even Phil Z. learned this when he made Zfone. It has to be transparent to the end user and have a simple way to authenticate the other end.


Keybase: where I put my GPG keys, which I basically only use to sign git commits for repositories where I'm most likely the only person that will ever lay eyes upon them, but at least I can verify that nobody pushed to them...


Is keybase open source?


I believe a lot/all of their stuff is, github repos here https://github.com/keybase


Thanks.


People who use PGP keys, can you give examples of your use? I'm genuinely curious. Who are you contacting, or who is contacting you? The author says he only receives 2 encrypted emails a year. Not only do I not have a PGP key, I don't think I've ever found myself in a situation where it was even an option to use one.


1) I occasionally use it to send secrets (password, certificates, private keys) to people when I can't meet them in person.

2) I share a file-based password manager with other people, that is basically just a collection of PGP-encrypted files with multiple recipients (managed using "pass").

3) I sign git tags, so that people know I have made the release.

4) I rely on the fact that all Ubuntu packages are signed and I will not accidentally install a package from an unknown source.

5) I have encrypted backups.


I've only ever used my PGP key for two purposes:

- to sign tags in Git for open source projects that I maintain

- to sign custom packages I build (and host) for Arch Linux


Haven't used it for communication but I've been using it as a means to store encrypted backups with a cloud provider.

If you're interested, check out duplicity at http://duplicity.nongnu.org/.


- All internal company emails

- Mailing lists

Side note: using pgp with MacOs Mail is super easy https://gpgtools.org/ (the project is currently working on Sierra support)


Super easy until you upgrade your operating system and suddenly you can't use your mail client for months while the understaffed open source project is trying to reverse engineer whatever Apple changed...


Yeah... definitely not the best experience. But it's free software so I'm not going to complain.

I have been using Enigmail on Thunderbird in the meantime, which makes me appreciate how well the native Mail app functions.

Can't wait for that fix.


Sure, if anything I'd complain about Apple's mail client being closed source and incapable of PGP.


Mostly exchanging cat pictures with a good friend, keeping subject empty as it remains unencrypted. Once Enigmail is set up, it's basically harder to not encrypt.


a) signing Scala libraries I release

b) ordinary emails to a few of my friends

c) Facebook notification emails

I don't think I've ever used PGP when emailing a stranger, but I very rarely email strangers in the first place.


I was only thinking about keeping communication private. I didn't think about uses like signing libraries. Great example, thanks.


  - I use it as my ssh key
  - I use it to sign my git commits and tags
  - I use it to share credentials with people (e.g. "hey bob, what's the password to the shared XXX account" => pastes to me in IM encrypted to me)
  - I use it to encrypt passwords in my password manager
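For reference, the git-signing and gpg-agent-as-SSH-agent parts of a setup like this come down to a few config lines. A sketch, not a complete guide (the key ID is a placeholder):

```ini
; ~/.gitconfig -- sign commits by default (0xDEADBEEF is a placeholder key ID)
[user]
    signingkey = 0xDEADBEEF
[commit]
    gpgsign = true
```

```ini
; ~/.gnupg/gpg-agent.conf -- let gpg-agent speak the SSH agent protocol
enable-ssh-support
```

With `enable-ssh-support`, pointing `SSH_AUTH_SOCK` at gpg-agent's socket (see `gpgconf --list-dirs agent-ssh-socket`) lets a GPG authentication subkey serve as the SSH key.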


    1) Company communication
    2) Signing critical commits, tags
    3) Encrypted-mail-to-self
    4) Encrypting critical files
    5) Communicating about security vulnerabilities
I don't use it for backups (I use Borg), or for SSH (Ed25519 keys).


I've used them at work, internally, to send production credentials to and from coworkers. It would obviously be bad if our email or chat service was compromised, but at least it wouldn't be a direct path to our prod servers.


1. Reporting vulnerabilities

2. Signing git commits/tags


Signing .deb packages. Debian and its derivatives are core users of gpg as it's basically a requirement to sign installation packages - if the user doesn't have the key in their trust store, they get a big fat warning when they try to install said package.


Debian package managers comprise a surprisingly large proportion of the strongly-connected set.


I will add what I wrote on a different thread

PGP is used very heavily on online drug marketplaces. You really can't use Signal or WhatsApp there - leaking too much metadata - and even OTR is leaking too much data.

PGP is quite good for this, and people use it to encrypt their communication.


I've been thinking a lot about PGP and other encrypted messengers lately. It's incredibly hard to get a lot of people to agree on one messaging app besides default SMS. I wish there was an open source suite of tools for mobile/desktop that easily layered PGP on top of SMS/email experience and would fall back in the absence of keys. Perhaps bluetooth for swapping keys with friends. It's something that needs to be seamless enough that the end user can't tell the difference. I don't think messaging encryption will achieve mass adoption until something like that is built or built into mobile OS's.


Carriers would need to change the way they handle SMS, and everything a carrier does is subject to state regulations. And states seem to like clear text.


Why's that? I understand the message would increase in size because of the encryption, but I think it would be technically feasible now. Didn't Apple just introduce encryption into their messenger? My issue with Apple's encryption is that it's closed source and Apple-only.


Apple's iMessage just detects when the other party is using an iOS device and sends them an iMessage instead of an SMS. The GP is right; cell service is so aggressively regulated that there's no hope of carriers adopting a better standard, so the best we can hope for is that at some point mobile OS distributors agree on some open standard like the Signal protocol.


iMessage end-to-end encryption is not a recent introduction. It has been in place for several years, though I can't find the exact iOS version it was introduced in.


Recent to me is the last few years, technology-wise. Your timeline may look different.


Fair point. We are talking about a 5 year old product though...


That's pretty much how TextSecure used to work for SMS, but you still needed to get people to use it for SMS.


> ...bluetooth for swapping keys...

First rule of PGP keysigning parties - don't bring a computer. (Or at least if you do, don't turn it on.)


I gave up on the "web of trust" a long time ago for most of the reasons the author states. I think in order to work PGP would need to reach a critical mass of users that seems totally out of reach at the moment. Maybe if Google or Facebook starts issuing mandatory PGP keys linked with each account or something like that. Not sure why they'd want to do that though.

That being said there's still a lot of good and useful in PGP even if you ignore the WoT completely. I use it to secure my passwords, log into remote servers securely with SSH and I sign all of my emails with it, which is probably useless 99.9% of the time but at least it can be used in retrospect to prove that I did write those messages. I can also use it to sign git tags so that my code can still be trusted even if there's a breach in, say, github. I have a rather vast choice of GnuPG tokens I can purchase if I want an added layer of convenience and (hopefully) security.

Sure, WoT is simply unusable currently unless you're communicating mostly with hardcore PGP enthusiasts. That won't be enough to make me give up on PGP.


One perfectly valid way to solve the web of trust issue with PGP is to simply ignore it. Just stick your pubkey on your website and you are done. You accept that there is a very low chance that an encrypted email from an unknown sender has actually been intercepted via a substituted key. If you think you are of interest to entities that have the ability to MITM your pubkey, you might want to mention the problem to potential unknown email senders as a disclaimer on your web page. In practice, an entity with the power to MITM pubkeys is not going to use that capability unless they are really, really sure, as they are eventually going to get caught at it.

Things like Signal and WhatsApp don't solve the web of trust issue either, so you are not any worse off using the head-in-the-sand approach.


Years ago I worked with a guy who literally wrote a book about how to use PGP. I asked him if he could help me set it up and he said "I don't use it, it's too hard."


It's not too hard, at least not on Linux and for anyone with some familiarity with the terminal. Getting started can be done like this...

Install GPG:

  sudo apt-get install gnupg
Generate a key:

  gpg --gen-key
Export your public key as ASCII text and then post it somewhere publicly:

  gpg --armor --export $your_uid > your_public_key.gpg
Import my public key:

  gpg --import my_public_key.gpg
Verify my key by viewing my fingerprint (type fpr) and confirming it with me, then sign it (type sign):

  gpg --edit-key $my_uid
Encrypt the file message.txt and then send message.gpg to me via any medium:

  gpg --output message.gpg --encrypt --recipient $my_uid message.txt
Decrypt my response to you:

  gpg --output response.txt --decrypt response.gpg
I know that's pretty complicated for an average user, but it's not harder than any of the day to day work that we do as programmers. I have not used GPG in years though since my deep web adventures, so hopefully I didn't mess anything up and prove the point that GPG is too hard!


I use MacGPG to sign commits on GitHub but I have to admit that the E-mail portion has fallen by the wayside. (The last time I really used it was to E-mail a professor of a cryptography course for an assignment!)

While I also don’t know many people that use this for E-mail, it doesn’t help that virtually every OS update in the last 5 years has consistently broken it, taking sometimes months for a fix.

For those reasons, this needs to be baked into the OS to be viable. Only when somebody like Apple can install it by default, and make sure it works between updates, will it have the reliability and widespread availability that is necessary for success.


After all that, he was only getting two encrypted emails a year! Damn. That's crazy.


The deepest I ever got into active PGP/GPG was in college (where it is certainly easiest to have WoT key signing parties), and so far as I recall none of us ever really bothered encrypting anything to each other; we just signed most of our emails as something of a prideful badge that didn't really mean much, all things told. (To the point where at least one friend made a joke fake PGP signature that wouldn't verify, just to prove no one was bothering to verify them either.)


I did exactly the same, signature as a badge of being one of "those guys". In the days of 56 bit "international edition" Netscape (or slightly thereafter, but still heavily influenced by that early wave of NSA-awareness), it felt like being way ahead of the curve. Kudos to the guy with the fake signature, in hindsight I must say that he truly nailed it.

One day however, my bank started offering transaction notifications by email, with optional PGP encryption. Suddenly there was real utility, and without any trace of WoT issues (key exchange over the same web frontend already trusted for actually transferring money, and the key in question is only for read-only messages). Other than that, the only encrypted messages I receive are the ones I send to myself as a convenient (because everything is already set up) form of secure cloud storage.



This has been my experience. The only "good" experience I've had with encrypted messages through email was a back and forth exchange I had with a fellow Keybase user where I manually copy and pasted blocks of encrypted text into/out of their web interface.


From my experience - the only PGP users I've spoken to were all on Keybase or interested in a Keybase invite. It was about 6 people for the entirety of last year - and 3 people this year...it certainly has a problem of "almost nobody uses it" but Keybase seems to have eased things slightly - or at least made it easier to discover people who also use PGP.

I see the two problems being "People don't bother with the clunkiness of using PGP when sending an email about what to pick up from the store" and "most users have no reason to talk to most other users".

I'm considering making it a point to message people with interesting Keybase avatars or social profiles tied to their Keybase if only to have an excuse to use PGP more, as silly as that might sound.


I have a keybase account and don't really use it. I like the idea, but part of the issue for me is attaching my "real name" to various online identities. I've used different types of pseudonyms over the years, and due to poor opsec, some of them could be linked to the pseudonyms I use now. It's nothing illegal, but also nothing I'd like others to know about. So to attach my real name to keybase, I'd have to reestablish my identity in various places. Doing that, of course, removes some of the trust associated with the keybase model.

Additionally, and I realize this is tangential to this discussion, I use pseudonyms to somewhat reduce my privacy "surface", so to speak. If I take my twitter, HN, reddit, etc, etc. and say "this is me", you could build a pretty decent profile of who I am (politics, hobbies, profession, where I live and so on). That's a different privacy problem than keybase is trying to solve, so no criticism is intended, but it is a problem for me.


I believe one of the creators had said it is okay to have multiple accounts to keep identities separate or even to have an account for each identity. It does make it far less user friendly to need multiple accounts and multiple keys though and introduces a larger chance of making mistakes. Especially if it isn't that important to you (and it doesn't need to be!)

I use KB as an easy way for people to verify my signed messages - not necessarily for sending encrypted messages to other users. Mostly just a "This is me, you can verify it is me at Keybase easily - as long as you trust Keybase."

Doing that means users don't need to install PGP and know how to use it to verify that I am me. It isn't important now - or hopefully ever. By making a practice of it, my users expect it. If I am ever compromised, the malicious actor won't succeed in fooling my users, as I expect at least a few will try to verify the message and will see it doesn't verify.

For myself, it's more about being a solution for a "what if?" scenario than anything practical or even privacy-related. It's just the best pseudonymous way of proving identity within some level of reasonable doubt that I know of.


Keybase has clearly moved away from PGP. They want to use Saltpack whenever possible, NaCl based encryption. They want to solve the problem of multiple devices and not having to share the private key between all of them.

As far as I know they are working on a messaging app as well.


I admit my ignorance of saltpack and keybase's implementation of it, but don't they propose storing the key for you? That seems to create a trust issue, which is precisely what the author is complaining people don't pay attention to: trust.

On the other hand, perhaps the argument for this would be a "trusted 3rd party" model (a la S/MIME).


Well, you can have your GPG private key online if you like, but that's not my point. The new system moves away from having any sort of master key.

Rather, every device has a new key, and they all sign each other. You can add new devices without old proofs being invalidated.

See: https://keybase.io/blog/keybase-new-key-model and https://saltpack.org/

I would really like a solution using this stuff that is highly integrated with my mail client.


That's an interesting solution. Rather than having keybase keep your key, your devices are communicating directly to validate each other? I'm going to have to review this in more detail, thanks.


Currently you have to use a paper key to do it. You then upload a public proof chain. It's not where it should be yet, but the concept is pretty good.


They also have the KBFS which is very interesting. But yes, very clearly pivoted away from PGP and are working on other problems.


I don't think they are working on other problems; rather, they realised that GPG has limitations and they cannot solve it with PGP. The problem they are working on is the same problem they started with.



I typically send 3-5 per week.


Was that user Filippo, by any chance?


Yes, surely that plays into his decision a bit. Some of us get that every few hours from people at significant risk who really need to use PGP.


Stuff to secure has moved away from email while gpg stays primarily an email project.

The concept of a git repo means I don't need to sign anything, I'll just roll back if I pull the wrong thing. Socialization at work hasn't atomized enough that my only human contact with a coworker would be a gpg signed commit anyway.

The concept of software distribution being a tar.gz.gpg or verifiable md5 file is obsolete. Behind the scenes something like apt-get does sign things but how to integrate its list of keys with the end user is a mystery, its essentially magic. Besides it provides no security due to lack of MITM attacks in practice.

Can't use a weak OS unless it's behind a firewall and/or accessed through a VPN, in which case plain text is about as strong as the weak OS, and the VPN or physical LAN security is "good enough". So: plain text files on a networked file server, perhaps relying on logins and permissions, but mostly on audits and firing anyone who does something naughty.

Everyone needs version control; no one understands it but some (repeat, some) devs. The office workers have a similar relationship with encryption. Also with databases, given that the corporate standard database is Excel. Talk till you're blue in the face, it will change nothing. Good idea, not good here. If you think office workers need encryption, you're probably wrong, but it doesn't matter because they definitely won't listen anyway. They're too busy making closed, siloed databases with Excel.

Can't use GPG to encrypt web traffic, there's a whole SSL https infrastructure for that. The cloud resource can be assumed to be completely government(s) (and hacker) penetrated at all times. Just a business decision to tolerate that. I could use encryption and signing to make sure the traffic isn't interfered with before the NSA logs their plain text version for all time, but why make their jobs easier?

Theft by monitoring data streaming along has pragmatically never been an issue; it's always someone stealing entire (copied) mass storage units at a time, or violating some higher-level business protocol of "look don't touch" or even "don't touch", but it's all plain text for various business reasons. So encrypting transmissions is a waste of time; VPN exists more for AAA than to prevent monitoring. Multiple governments and corporations have full access to both endpoints anyway.

Email is mostly (exclusively?) used for public mailing lists and corporate receipt/alert traffic, none of which benefits from encryption. It's been a while since I had an old-fashioned conversation over email. Everyone loves texting and messaging, none of which can use encryption usefully.

Anyone with physical access to the device can pwn it completely; there's no point in a purely software solution, you're just wasting time.

Very few people had an application in business or real life in, say, 1970 for nuclear-grade encryption. Nothing has changed by 2017 other than that it's trivial to provide if someone needs it. Many people want it because it's cool, but it does nothing useful for them, so it's definitely a want, not a need.


> The concept of software distribution being a tar.gz.gpg or verifiable md5 file is obsolete. Behind the scenes something like apt-get does sign things but how to integrate its list of keys with the end user is a mystery, its essentially magic. Besides it provides no security due to lack of MITM attacks in practice.

The very first bootstrap is impossible to do securely in the general case short of building a computer from scratch, but you can do things that make it difficult to attack e.g. ask a bunch of different friends what the sha1sum of the latest debian release should be.
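
That cross-checking step can be sketched in a few lines of shell (the image file and the "reported" sums below are stand-ins; in reality you'd collect the sums over independent channels):

```shell
# Stand-in for the downloaded installer image.
echo 'pretend this is debian.iso' > image.iso

# Your locally computed checksum.
mine="$(sha1sum image.iso | awk '{print $1}')"

# Checksums reported by independent sources (faked here to match;
# in reality each would come from a separate friend or mirror).
reported_a="$mine"
reported_b="$mine"

# Refuse to proceed unless every reported sum matches the local one.
for sum in "$reported_a" "$reported_b"; do
  if [ "$sum" != "$mine" ]; then
    echo 'MISMATCH - do not install'
    exit 1
  fi
done
echo 'all reported checksums agree'
```

This only defends the bootstrap; once a trusted base system is installed, the signed-package chain of trust takes over.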

On the assumption that you manage to get a non-compromised version of debian installed you are secure even against MitM attacks; there's a chain of trust, every package has been signed by a key that has a key fingerprint claimed by a specific human maintainer, and new maintainers can only join after at least one maintainer has confirmed their identity against a government-issue document. Of course this doesn't make attacks impossible (e.g. rubber-hoses against one of the maintainers), but it makes the cost a lot higher.


Great - how do I get a non-compromised smartphone? :/

I'm kidding but I'm also serious.


Yeah. I fear the open-source side only ever catches up once something becomes commoditized, so the actual answer is probably that if you care enough you use a weird and slow phone built for this stuff (that Mozilla phone project?), you use whatever the current replacement ROM project is (I would hope one of Cyanogen et al would offer a carefully signed open-source build - I haven't actually looked), or you wait a few years.


Even if Cyanogen were perfect, there's closed-source firmware running on the baseband processor, with complete access to system memory, the microphone, GPS, and the network.


Usability is the "key" - it's hard enough to get people to use signal ("why do I need another messaging app?")


If they don't have Signal you could get them to either use Whatsapp, or only use the encrypted conversation feature of Facebook's Messenger.

They should have one or the other already installed if they're complaining about "another messaging app".


Very true, I do default to WhatsApp quite often, but being in FB's ecosystem is something I'd like to avoid (especially given the privacy bargain that comes with both).


This works fine until you get someone who only uses iMessage, and you use Android.


Exactly what finally convinced me to add whatsapp.


How is the author so seriously involved in PGP yet only receives two encrypted emails a year? I'm basically just a dude who uses PGP because it's cool, and I get tens of them. You just need one friend who also thinks it's cool.


Same question. I have one friend, smart and technical but just middle-of-the-pack when it comes to his non-domain-specific computer skills, and I got him using PGP regularly in our communications. He insisted on using webmail so we went with Mailvelope which mostly worked well up until it seemed to be losing his keypairs. He got pretty frustrated with that so now we're trying the ProtonMail approach and hoping for the best. To your point though, we've exchanged many dozens, perhaps 100 PGP-encrypted emails over the past year and I'm sure he's never heard of a "key-signing party".


Even Zimmermann, PGP's author, has given up, as this article says https://www.scmagazine.com/phil-zimmermann-doesnt-encrypt-em...


A rather large misrepresentation of what was actually said.

>Zimmermann later clarified in a Motherboard article that PGP, acquired by Symantec in 2010, isn't compatible with his MacBook, and the technology never worked with any iOS device.


Dark Mail seems to be dead. Are there any efforts to make e-mail secure by default and e2e encrypted?


A few projects that I'm aware of (disclaimer: I'm involved with LEAP):

- https://pixelated-project.org/

- https://www.mailpile.is/

- https://modernpgp.org/memoryhole/

- https://inbome.readthedocs.io/en/latest/

- https://leap.se/

Edit: btw, if you're in Berlin from 14-18 Dec, drop by the AME2016 unconf+hackaton https://github.com/mailencrypt/ame2016



It does not have a valid certificate just like any other custom domain hosted on GitHub Pages.


Then you shouldn't use GH Pages for hosting a security-oriented project, nor a custom domain redirect.


That's a very nice list of links and overlaps with some idea I had, thanks a lot !

INBOME in particular looks to me to be (part of) the way forward; is there any way to follow progress?


some discussion going on in the ame2016 ML: https://lists.mayfirst.org/mailman/listinfo/ame2016

you can also track the repo: https://github.com/mailencrypt/inbome/


Most interesting e2e projects have abandoned email, specifically SMTP, as a secure messaging platform. I would look outside SMTP-based solutions if I were to start using a different project (assuming doing so is an option... I hope it is!).

My recommendation here is Signal: https://whispersystems.org/


A big problem is that a lot of this is driven less by people who have a genuine need for encrypted communication and more by people who want one on principle. And the latter tend to include the people who are more likely to try The Next New Thing.

And it also makes sense. A lot of these services are from companies that need to make money. And there isn't much money in the journalists and dissidents who don't have a bespoke solution.


Signal is nice, and I use it. But it's an instant messaging system. Email has different use cases.

I think what we're going to need is a new, non-SMTP protocol, which preserves all of the good things about email, while providing e2e encryption and (pseudonymous) identity assurance. I don't know enough to be involved in designing that protocol, though, other than saying what I want to see as an end-user.


Pond has interesting properties; I think next-generation mail will have to implement some of those ideas.

and Signal/WhatsApp come to replace (and kill) XMPP, not email. Another issue is the generational shift away from email, which is increasingly only for spam and work...


Since Pond is hard to search for, [link attached][0].

[0]: https://github.com/agl/pond


What properties does email have that asynchronous messaging services (e.g. Signal) do not?


Cross-platform (Chrome web apps don't count), federated, and distributed, to name a few. The reason email is so entrenched is probably exactly these properties. Being able to send a message from any provider to any provider certainly helped adoption spread easily.


There are protocol properties, and client properties. I think some of both are important.

### Protocol

* Easily federated

* Identifiers can be memorable/meaningful (unlike phone numbers) while still being globally unique (thanks to federation)

* Device independent (not tied to a phone number, can generally use the same account on different devices)

* Can contact people you don't know/haven't met (this is possible with Signal, but they'd have to publicly share their personal cell phone number, which is a no-go).

### Client

* Optimized for longer-form, less immediate messaging (folders, drafts, rich text)

* MIME attachments (Signal supports only a limited number of predefined types of attachments)

I feel like you could probably layer an email-equivalent on top of Matrix, but I'm not 100% sure about that.


The author of the article mentions Signal as well, but how do you handle communication from a laptop or desktop computer and/or with people who don't own an Android or IOS smartphone?


Signal does have a desktop application. I believe you can also register a Signal account using a phone number from a service like Twilio. I'm not 100% sure that will work with Signal desktop though.

https://whispersystems.org/blog/signal-desktop/


Signal in a Chrome app can pair with Signal on Android/iOS. But I don't believe you can use Chrome only. The Chrome app just waits for you to pair with a phone and can't send/receive messages until you do so.


Why not S/MIME? Most clients support it, it's stupid easy, and it has had a lot of eyes on it considering its age.

Constantly re-inventing email encryption seems to be the problem here. None of them really make this stuff any better. Key distribution is still going to be PITA, but sticking with a supported standard makes the most sense.


No one seems to want to touch anything that already exists. I've never heard of anyone thinking of redesigning the UI/UX for a typical MUA or a browser's keystore (throwing in a BTBV option or whatever), although I still believe that must be possible. Everyone's off with their own proprietary, non-interoperable (occasionally "open") standards.

Also, _almost_ no client supports _any_ form of authentication and encryption on mobile, be it OpenPGP, S/MIME, PEP, SaltPack or whatever else. There are a few, but that's not even remotely close to "most". Nor is there much choice of good desktop client software.


> authentication and encryption on mobile

The iOS built-in Mail application supports S/MIME. However, it doesn't support anything above TLS 1.0 for IMAP, curiously.

K-9 and the Android mail client don't support S/MIME, however.

On the desktop, Thunderbird and Outlook support S/MIME. So does mutt.


iPhone supports S/MIME out of the box. Android has 3rd party mail clients that do. Several desktop clients do, including Outlook.

It's not as dire as you make it out to be. People just don't want the hassle and don't value their email privacy. Once they do value this, which they should considering all the breaches of late, then it'll catch on. I work with companies that have S/MIME internally for just this reason. It's completely feasible.


Doesn't cover "default", but mailvelope is a project working on making email for real people (i.e. webmail) easy to secure. I use it often.

https://www.mailvelope.com/


Requires private keys to be loaded into the browser extension. No thanks! Google's E2E is the same, which is to say, garbage.


There's no way of doing E2E without a key in the client, per its very definition.


Google's End-To-End also seems dead.

https://github.com/google/end-to-end

I would say ProtonMail or the miniLock-based Peerio.com are now the most interesting projects for encrypted email.

EDIT: https://minilock.io/

https://github.com/PeerioTechnologies/peerio-client


Private keys accessible to the browser is the worst model. I don't think this was ever taken seriously at Google either...


It's not the worst model. It's still significantly better than what Google/Microsoft/Yahoo are offering.

And at least it's a model that can scale. When we get 10% of email users to use (real) PGP, then we can talk about switching everything else to it, too.

But I assume that's never going to happen. The only way PGP would reach those numbers is if Google actually finishes the End-to-End tool, and not only that, but then it actually makes it part of the Chrome browser and automatically asks all Gmail users at sign-in if they want to set-up End-to-End, too. That's the only way I can see PGP reach 10% of the email market. But even then I assume you'd argue it's still "browser-based" encryption. So I guess it's pointless.


Private keys required in the browser is bad. However, we could have a system where people who want or care can keep their keys offline (or on a smartcard), while those who don't can still participate.

ProtonMail with SRP and 2FA, where an attacker has to do pretty tricky stuff but you still get an e2e system, is far better than what we have now. It's a far more involved attack just to look at your old emails, and it's easier to detect.

We will never have a usable experience for all users if we do not accept compromises. I would prefer everybody in my family to be on some system like that, compared to GMX or Yahoo.

There is a lot more you can do to protect the client from the server as well. Keybase is doing some interesting stuff in that direction.

We just have to change our assumptions about what it means if an email arrives encrypted with GPG.

GPG is already not perfect and we should move away from it anyway; again, Keybase is offering some interesting steps in the right direction.

Steps on a long road.


> ProtonMail with SRP and 2FA, where an attacker has to do pretty tricky stuff but you still get an e2e system, is far better than what we have now. It's a far more involved attack just to look at your old emails, and it's easier to detect.

Can you explain how this is better than Google End-to-End? (Which isn't completed, I know.)

Or even how it's worth implementing over regular webmail in the first place?

Extensions are obviously not optimal, for the very same reason crypto in JavaScript is not good: timing attacks, cache attacks, optimizations, secure storage (in-memory and on-disk), random-number generation, verifiability, ....

Protonmail suffers from that too. Except it's also subject to key exfiltration. Without an exploit. The server simply promises not to send malicious JavaScript.

It isn't significantly more involved for an attacker who has breached their servers to send malevolent code. It's also not easy to detect.

> We will never have a useable experience for all user if we do not accept compromises.

There are usable end-user interfaces capable of featuring secure cryptography.

Desktop applications. Mobile apps. Browser extensions. Hardware tokens (U2F).

Those can be audited. Inspected. Users can freeze updates.

Trying to implement secure cryptography in JavaScript that is fetched repeatedly from an assailable server is just asking for trouble.

> GPG is already not perfect and we should move away from it anyway

Yes, but that doesn't mean ignoring the advancements that have been made in modern security.

The most appropriate response to securing email is still this: Don't, use a secure messenger; or use out-of-band encryption. It's likely to remain that way forever.


I have not looked at Google End-to-End.

The main benefit seems to be that, to read your messages, code has to be sent to your browser. That is much easier to detect than a compromise of the server is now.

Plus it would have network effects that would benefit normal users.

In theory you could have clients with good GUIs and all that; however, these barely exist, and people simply want to use the web for this sort of stuff.

If your answer to an issue that billions of people deal with is "Don't", then that's just not a solution, except maybe if there is a feature-complete replacement.


Sending code to a desktop is easy to detect. Sending code to a non-web-based mobile app is easy to detect. Sending code to a browser isn't even detectable. In practice, no one audits JavaScript. If browsers offered the means to detect changes to cached content, security experts[1][2][3] would be less grim about webpage security.

Encryption done in untrusted JavaScript is security theater. If these websites offered privacy policies claiming to never read your data, like Riseup does, the end-result would be the same. Safer, actually, because deploying unnecessary crypto increases the vulnerability risks.

Recommending these websites is actively dangerous. Journalists and whistleblowers who take these claims of fraudulent security seriously are going to be killed. Getting them to understand that email cannot be secured in-browser is the entire point: if you have important information to protect, you should be moving on to different approaches.

WhatsApp is literally safer than this, and it's more accessible.

[1] https://www.schneier.com/blog/archives/2012/08/cryptocat.htm...

[2] https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...

[3] https://rdist.root.org/2010/11/29/final-post-on-javascript-c...



https://pep.foundation/

Commercial offerings for companies at https://www.prettyeasyprivacy.com/

(No, as far as I know they are not “open core” – they are 100% free software.)


There is a reason they mostly don't exist, at least in the US. After Lavabit was so publicly shut down, it's going to be a losing proposition for anyone to put so much time and effort into it, just to then be given a choice between providing a backdoor or shutting down as soon as the government realizes they can't break your service.

If you're going to start such a service, it needs to be in a country that will respect free speech. And there aren't many.


Lavabit didn't do E2E, or at least it stored the users' keys, since the owner did in fact have the keys needed to decrypt the Snowden e-mails (and sent them to the FBI on paper, using an extremely small font, apparently).


IIRC, they did not store user's keys. What they wanted was for Lavabit to turn over their SSL key, which would have compromised security for the entire service, rather than just targeting Snowden specifically.

Presumably they did not have the ability to turn over the emails themselves, so they wanted Lavabit to provide a vector to do a man in the middle attack.


> I never ever ever successfully used the WoT to validate a public key.

TOFU, anyone?

From: Werner Koch wk at gnupg.org

Date: Fri Dec 4 14:06:49 CET 2015

Subject: [Announce] GnuPG 2.1.10 released

Hello!

The GnuPG team is pleased to announce the availability of a new release of GnuPG modern: Version 2.1.10. The main features of this release are support for TOFU (Trust-On-First-Use) and anonymous key retrieval via Tor.

...
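The core of TOFU fits in a few lines. A toy sketch of the idea (not GnuPG's actual implementation; `check_key` and the in-memory `db` are invented here for illustration): pin the first fingerprint seen for an identity, then flag any later change.

```python
def check_key(identity, fingerprint, db):
    """Pin the fingerprint on first use; report matches and conflicts."""
    seen = db.get(identity)
    if seen is None:
        db[identity] = fingerprint  # first contact: trust and remember
        return "new"
    # a changed fingerprint is exactly what TOFU is designed to surface
    return "match" if seen == fingerprint else "CONFLICT"
```

In GnuPG 2.1.10+ this model is enabled with `--trust-model tofu`, or `tofu+pgp` to combine it with the Web of Trust.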


"Yubikeys would get exposed to hotel rooms."

Can someone please elaborate on this?


I took that to mean he left it in a hotel room while going somewhere, exposing it to any number of maids, possibly including those of a malevolent nature.


As far as I understand, it is not possible to extract a private key from a Yubikey [1]

[1] https://www.yubico.com/2012/12/yubikey-neo-openpgp/


I think his security threat model was nation state when he really needed APT, annoyingly persistent teenager. There are elements of what he did that I'd do if they were automated. But if the NSA, Mossad, Hacking Team want to get me, they're going to get me. And it would only be vanity to say they are even thinking of me.

So this is the perfect being the enemy of the good. I need good privacy and good security. I'm not going to torture myself for perfect privacy and perfect security. Cut to the last scene of The Conversation where Gene Hackman's character tears apart his office ripping down the walls to find the bug and the eavesdropper taunts him. Who is torturing whom?


Yeah. A friend of mine said it best: "if there is a conflict between convenience and security/privacy/anything else, convenience always wins." PGP didn't stand a chance.


gpg is promoted as a kind of swiss army knife of privacy, but its interface always puts email first. If you use it for something else, you must paranoidly guard every command so that it doesn't by mistake publish information about your privately used keys, for example.


Very true. A high-quality library (that was actually built as a library) for OpenPGP would be a very valuable thing to have.


Who here is using Keybase to manage distribution of your public key? It seems like a great idea, since your key is tied to your online identity.


Keybase is great for managing my keys, but I haven't found any services that make it easy to natively use them. They've sat unused for the ~2 years I've had a Keybase account.

That said, I have about 25 invitations. Anyone want one?

Edit: Not true. I used my keys once to sign a GitHub release for a novelty project nobody uses.


The #1 problem, beyond the usability issues, is that most users still just do not care. They are willfully ignorant (a mentality which is actively celebrated these days in some influential countries), and cannot be convinced in the value of privacy or security.

I have tried without much success to get people to move to ProtonMail. I have tried without much success to get people to move to Wire messenger. (And incidentally, the author mentions Signal and WhatsApp... I wonder why he doesn't use Wire?)

So without consumers who care, the only audience for PGP and other security focused tools are the geeks who too easily tolerate bad interfaces.


Me too. The reason I gave up on PGP is I couldn't find anyone that would willingly use the service through email. With Signal I can find people that use it.


I've been using PGP for almost a year, and yes, I agree. I still send signed email, but have never received an encrypted one so far. Generating keys and backing them up is tricky, and I have probably made mistakes generating or storing them at some point. Is there a step-by-step best-practice guide on how to use PGP?


I gave up 3 years ago: http://blog.mostlydoing.com/2014/03/how-to-securely-store-pr... (I don't trust myself to securely store my private keys)


I feel the pain as well. I'm not ready to make the same jump however.

I really do support Keybase; there I see the potential to solve many of these issues. I would love better integration into the e-mail ecosystem, but sadly it's not there yet, and it's not their focus.


My ongoing concern is around where reasonable alternatives fit in.

- Securedrop, where users upload messages and they are automatically encrypted

- Darknet services

- Businesses where users communicate via desktops.

Signal/ etc works in a different space and doesn't provide an alternative to these.


There isn't a lot to unpack in this article. Most of it is set-up, explaining how connected he is to a community that is enthusiastic about PGP yet doesn't apply secure operations in practice.

Then there is the main complaint:

> I haven't done a formal study, but I'm almost positive that everyone that used PGP to contact me has or would have done (if asked) one of the following:

> - pulled the best-looking key from a keyserver, most likely not even over TLS

> - used a different key if replied with "this is my new key"

> - resent the email unencrypted if provided an excuse like "I'm traveling"

I haven't done a formal study either, but no one I know that uses PGP would do any of these things under any circumstances. PGP works fine for myself and the group of people I know that use it, because we adhere to security protocols that are just as important -- if not more -- than using PGP itself.


That is definitely not my main complaint, and I suspect it might have caught your eye because it's the one that wouldn't apply to you (which is absolutely possible).

The article is about the flaws of long-term identity keys, and it would stand even if there weren't UX, adoption, or security protocols adherence issues.

Maybe try to unpack a bit more :)


You're right, long-term identity keys are bad. Long-term identity keys are not a concept mandated by PGP, they are a result of how people use PGP or how PGP is implemented in a third party app.

No part of PGP requires you to use a key more than once. This phenomenon is a result of a consensus of people deciding on a terrible operations strategy over a long period of time.

Edit: this comes to mind https://gist.github.com/grugq/03167bed45e774551155


Agreed, I link to that Gist exactly in the "Moving Forward" section ;)


I must have missed that.

I don't understand what the point of your blog post is, in this case. You understand why PGP is needed and how it's important, how to use it correctly, etc, yet you "give up" on it because no one you know uses it correctly.

Is that it?

By the way, how are you going to send someone a 5GB file securely using Signal?


Encrypted in any way, hosted anywhere safe, sending the passphrase via Signal, done.
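That workflow can be sketched with GnuPG's symmetric mode (filenames are placeholders; any strong symmetric encryption tool would do):

```shell
# Encrypt the file with a passphrase only -- no long-term keys involved
gpg --symmetric --cipher-algo AES256 -o big-file.gpg big-file

# Upload big-file.gpg to any host, then send the passphrase over Signal.
# The recipient decrypts with:
gpg --output big-file --decrypt big-file.gpg
```

The point is that the confidentiality of the file rests entirely on the passphrase, which travels over the already-secure Signal channel, so the hosting service never needs to be trusted.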


Have you tested your group of people?


What experiences have folks had with ZixMail? https://www.zixcorp.com/why-zix/email-encryption


"One click encryption is one click too many" - Bruce Schneier


On a related thought, using a 'secure' (or so they say?) email provider à la ProtonMail is only secure if you send your email to another ProtonMail user.

The problem with services like that is they omit to tell their users that email to outside addresses is not E2E: sending from ProtonMail to Gmail simply discards the benefits of using ProtonMail.

So yes, if you are trying to send encrypted email to a Gmail user, your only way is to use GPG, or to get them on board with ProtonMail and the likes. It's... impossible.


When sending to a non-ProtonMail account, you have the option to encrypt the message contents: the recipient has to open a link and enter a password. I think decryption is done in the browser in that case too (not 100% sure, though).


That is correct. The recipient can reply from that webpage as well, however you can't have a conversation there (your replies to their replies don't show up in that page, they have to get new URLs in their email).


From what I remember, Protonmail is more about protecting the client and contents of the inbox from snooping than protecting the email content in transit between their servers and other people's clients. Supposedly, they use in-browser encryption to encrypt traffic between the browser and webmail server, and encrypt the mailbox so access to the server won't provide easy access to your account's contents. They also claim that hosting in Switzerland reduces the risk of server-side government interference.


I don't want to be a defender of snake-oil email providers, but I think a 'secure' e-mail provider hopefully does more than just GPG to other users.

They hopefully have a good, secure authentication system (ProtonMail just added SRP and 2FA) and hopefully have correct policies for things such as DKIM and SPF.

They hopefully also keep up with security updates and stuff like that.

Doing these things seems normal to many, but the reality is that many e-mail providers have none of them.


The proposed solution is to use Twitter to point your correspondents to Signal or WhatsApp. This forces them to use one of these centralized services, and also to run proprietary software to be able to use them.

I'm also irritated by GPG and OpenPGP's shortcomings, but it still gives people a way to contact you with reasonable security and without having to use specific proprietary services.


There was not a single mention of CA-signed S/MIME certificates. That seems quite an oversight.


Never heard of yubikeys, pretty cool. Thanks.


So, to distinguish, there's web-of-trust things and general pgp/gpg encryption (and signing) UX. Both of these are pretty abysmal for non-technical users.

I don't think "muggle" users would be interested in the web of trust at all, and I doubt they can really handle it all that well. But I'm a pretty technical person (MS in Computer Science, PhD in a different field), and well:

http://keyserver.ubuntu.com/pks/lookup?op=vindex&search=eric...

I don't think the web of trust really matters- would you look at that and say, "hey, this Eric fellow has a 2004 key for his gmail, and an unrelated 2016 one, I'd better not trust him." Doubtful.

Honestly, in today's internet (I know, dangerously political...) I think there should be a stronger move toward broad-spectrum encryption of all emails. I actually generate trust with most of my email correspondents independently, but I sure would like to encrypt my communication. Signal is good for shorter messages, but email is still email.

It isn't like I have state secrets in my email, but I do have stuff I don't want random government snoops reading, especially if they're bulk-collecting. Furthermore, I think it's important for more people (even people who don't need it) to encrypt their correspondence, so we can provide cover for people who really do need it. Journalists and dissidents won't stand out as much if everybody is encrypting.

To that end, I think pgp / gpg is still pretty cruddy for UX. There are decent solutions for each platform, but nothing really good, and my friends / family aren't likely to use a mail client or webmail that's not at least almost as good as gmail/inbox just because I am worried about privacy.

I've recently moved to protonmail for most mail, since it has a very slick user experience and I want to know it well enough to be able to recommend it to other people. However, protonmail doesn't let me have my private key (or its analogue - I'm not 100% sure how things really work, but I have a public key that I can give to other people, and those other people can send me encrypted stuff from off-platform. I just can't reply in the same fashion). That means if I lose my protonmail account, woops, I can't read the emails you sent me encrypted to my @protonmail.com account, even if I get the emails. This is more of A Thing now that you can set up protonmail as your MX, and therefore get emails addressed to domains you control on the platform - if I ever swapped my personal domain around, I'd like to have the key.

So, for end-to-end encrypted simple messages, signal is great. I just wish protonmail did interop, and then I'd really recommend it to other people.


I guess this is a good opportunity to review Matthew Green and Moxie's posts on PGP, too:

https://blog.cryptographyengineering.com/2014/08/13/whats-ma...

https://moxie.org/blog/gpg-and-me/


I really like what Moxie wrote here. There is a big difference between the best thing and the thing that can actually exist in the real world right now.


The 'deficiency' of PGP lies in the leaky nature of the computer itself. How do you maintain your all-important private keys? On disk? In memory? On USB? All are leaky from the get-go. And this, I posit, is the problem, gents.


Why is WhatsApp trusted? Isn't it proprietary and controlled by Zuckerberg? What am I missing here?


9/10 end users just don't understand that security and convenience are inversely related.


I think they understand that quite well; that's why, when security gets in the way, they just find a way to bypass it.

Security has to be usable, if it's not usable then (almost) no one will use it.


Systems like Signal and WhatsApp show that that's not necessarily true to the degree of previous solutions.


I think that the analysis is a little more involved than that. Roughly, I'd say that at any given point you can make "trivial" tradeoffs between security and convenience. However there can be some groundbreaking advances in one that don't cost you on the other. And then that point you may be able to do a "trivial" rebalance if you'd like.


> Systems like Signal and WhatsApp show that that's not necessarily true to the degree of previous solutions.

I dunno if I'd really believe that until either company is willing to put a rising bounty starting at say $10 million USD for a real* break. Then we'll see.

*not due to user carelessness or social engineering


They only manage that by compromising on other fronts. That is not an option with E-Mail.


It is mainly the tools that are blamed, even in this post.

Nobody wants to make a better SKS keyserver or GPG CLI. Why? Because you don't get fame or money from it.

Filippo, show me your GPG commits.


What's this? But seriously, what is this? I use GnuPG and am quite fond of it. I've a pubkey.asc up on my website, and I use gpg to encrypt some files and my backup tarballs. PGP is not a mail tool, it's for encrypting strings. This guy does not know what it is and cries for having made much ado about nothing. Key signing parties? I certainly have better things to do. Just generate a key and put it on the MIT keyserver, call it done.

And he complains he doesn't get encrypted mail. So what? I'd rather be happy. It'll be useful once it works with email, and it has many other uses otherwise.
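The non-mail uses mentioned above look something like this (a sketch; `me@example.com` and the filenames stand in for whatever key and data are actually on hand):

```shell
# Encrypt a backup tarball to your own public key
tar -cz my-data/ | gpg --encrypt --recipient me@example.com -o backup.tar.gz.gpg

# Restore it later
gpg --decrypt backup.tar.gz.gpg | tar -xz
```

Because the tarball is encrypted to a public key, no passphrase has to be present at backup time; only the restore step needs access to the private key.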


He clearly knows what it is. He isn't ranting against PGP he is ranting against the WoT. If you got an encrypted email from Linus Torvalds, how would you verify it was him?

https://news.ycombinator.com/item?id=12296974


I wouldn't. If I'm going to get an encrypted mail from someone, I've already verified the sender.



