The issue with security is that you're not competing with other secure solutions, but with the convenience of going unsecured. I've read that even journalists who face real risk by communicating without PGP don't bother, because it's difficult to participate in something that requires others to participate too. There's also the issue that email is only one (slow) form of digital communication.
Cryptocat (ignoring security for a moment) is an attempt to solve the accessibility problem in a way that actually works. Widespread adoption works best with some centralization - and if we're talking about dragnet avoidance for most people, then this is probably a reasonable compromise.
If Google generated a public/private key pair for each Gmail user, tied the keys to each account, and then used the public keys to encrypt all email (taking an approach similar to using real OTR for Google Hangouts), then all Google-to-Google communication would have a layer of protection from unwanted ISP monitoring. Granted, you're still trusting Google with the private key, and a warrant or some other request could still reveal it, but you'd actually have wide-scale use of the thing. Facebook could do something similar for their chat, and then you'd have most of how people actually communicate covered, across multiple devices, in a way that protects against sweeping surveillance.
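Roughly the flow I'm picturing, as a sketch - this uses Python's cryptography library just for illustration, and the function names and the idea of a provider-side key store are my own invention, not anything Google has described:

```python
# Sketch of provider-side encryption at rest (hypothetical): each account gets a
# key pair, and mail is encrypted to the recipient's public key before storage.
# A hybrid scheme is used because RSA can only encrypt small payloads directly.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def create_account_keys():
    """Generate a key pair when the account is created; the provider keeps both halves."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return private_key, private_key.public_key()

def encrypt_for_user(public_key, message: bytes):
    """Encrypt the message body with a fresh symmetric key, then wrap that key
    with the recipient's public key. Only the wrapped key and ciphertext are stored."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(message)
    wrapped_key = public_key.encrypt(session_key, OAEP)
    return wrapped_key, ciphertext

def decrypt_for_user(private_key, wrapped_key, ciphertext):
    """The provider (or a warrant) can still decrypt, since it holds the private key."""
    session_key = private_key.decrypt(wrapped_key, OAEP)
    return Fernet(session_key).decrypt(ciphertext)
```

The point is that Google-to-Google mail sitting on disk or crossing a tapped wire would be ciphertext to anyone without the account's private key, even though Google itself could still open it.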
I'm pretty ignorant about most of this - am I missing something obvious that would prevent this from working? Obviously you're still trusting the companies, but we were doing that anyway.
Cryptocat is an attempt to solve the accessibility problem that essentially handwaves away many key problems in cryptographic security. It's a system that is simple to use because it's simplistic, and is thus less secure than even some other very simple systems.
For instance, OTR (as implemented by IM clients) is an example of an extraordinarily simple cryptosystem (I'd argue too simple) that at least provides for a notion of persistent keys.
That makes sense, and I think my comment was a little ambiguous. It wasn't so much an endorsement of Cryptocat itself as of the idea that a centralized system that supports encryption may be a reasonable compromise to protect against indiscriminate collection by third parties - and something I think the companies that already handle most communication could generally implement.
> The issue with security is that you're not competing with other secure solutions, but with the convenience of going unsecured.
True, but that doesn't mean the convenient and insecure apps have to totally dominate the market. I'd be willing to bet there are plenty of people who want true, reliable security, and are willing to take on a little hassle for it.
> I've read that even journalists who face real risk by communicating without PGP don't bother, because it's difficult to participate in something that requires others to participate too.
The same could be said of the early days of email: It's only useful if other people are also using it. But we overcame that chicken and egg problem. The same could happen for PGP. It's a cost-benefit equation: People need to be sufficiently worried about privacy, and the friction of PGP needs to be sufficiently low. There must be some tipping point.
> Cryptocat...is probably a reasonable compromise.
Not if it can be cracked with the processing power of a mere desktop computer - and I've heard claims to that effect. What are the use cases for a sort-of-secure app like Cryptocat? Certainly not hiding from sophisticated adversaries. So it's just about protecting your data from casual snoops, then. But in that case, I'd argue that even your basic instant messaging client is adequately secure, in that a casual snoop doesn't know how to mount a man-in-the-middle attack.
> If Google generated a public/private key pair for each Gmail user
Either you're encrypting server-side or client-side, in that scenario. If server-side, you're not much better off than just using HTTPS. Google still has your plaintext. So if you trust Google, just use HTTPS. And if you don't trust Google, then encrypting server side is pointless. Client-side encryption isn't currently viable: http://www.matasano.com/articles/javascript-cryptography/
I agree with your first two points, and if the complexity of using PGP were reduced to that of simply using email, I think it'd become more widely used.
I didn't mean to imply that Cryptocat itself is a reasonable compromise (I don't know enough), but that a centralized system that implements encryption for its users might be - which leads into your fourth point.
Yes, JavaScript client-side encryption in its current form is fundamentally flawed against targeted attacks, but I'm thinking of untargeted collection and immediate widespread adoption.
If you're collecting everything from everyone in perpetuity, the risk for abuse I see is that when someone becomes interesting to the government, it can query its data set and use it against them.
I'd think these companies could use client-side encryption to protect a user's data until that user becomes 'interesting', at which point they'd have to move to more fundamentally secure methods (which is how it currently is anyway).
Seems like the easiest way to get the most people generally protected from abuse.
Is using HTTPS enough? If so, then it seems like most communication would be protected anyway. I got the impression that this wasn't true - not because I don't trust Google, but because of something else (it seemed Google genuinely didn't give access to everything, yet people made comments about no digital communication being secure: http://www.youtube.com/watch?v=vt9kRLrmrjc).
So yes, it's true that JS crypto provides a higher degree of security than no crypto. And security is always about degrees, not absolutes. But you also have to consider that a user may develop a false sense of security if they're told their data is "encrypted." This does put the user at risk. So if you're going to use JS crypto--which is almost certainly unsound--you take the risk of misleading users in a potentially dangerous way.
To give a concrete example, let's say Google adds JS-based PGP support to Gmail. Suppose that, in general, it works: Gmail delivers properly encoded PGP messages to your recipients, and it can read PGP messages that are sent to you. But suppose further that Google is somehow compromised, maybe through technical means, maybe through social engineering, maybe through legal pressure, and a malicious JS payload is delivered to users, hidden somewhere deep in the page. This payload allows PGP messages to continue being sent and received, but it also backdoors you, perhaps by creating an alternate version of every message encrypted with the attacker's key.
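To make the failure mode concrete, the backdoor doesn't have to break any cryptography. A rough sketch of the logic (in Python rather than the JavaScript that would actually be injected, and with all function names made up):

```python
# Conceptual sketch only: all names here are hypothetical stand-ins, and the real
# payload would be JavaScript hidden in the served page, not Python.

def pgp_encrypt(message: bytes, public_key) -> bytes:
    """Stand-in for the real PGP encryption the page already performs."""
    ...

def send_to_server(payload: bytes) -> None:
    """Stand-in for an ordinary-looking request back to the server."""
    ...

def compromised_send(message: bytes, recipient_key, attacker_key) -> None:
    # The user sees exactly the behaviour they expect:
    send_to_server(pgp_encrypt(message, recipient_key))
    # ...while a second copy, readable by the attacker, rides along unnoticed.
    send_to_server(pgp_encrypt(message, attacker_key))
```

From the user's point of view, every observable behaviour is identical to the honest version.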
Unfortunately, current clients are not at all equipped to detect if this is happening. For the browser to be able to participate in a truly secure crypto system, it would need to have the most critical parts built in, not provided by websites as JS.
> Is using https enough?
It's generally believed to be adequate for protecting against a man in the middle. It doesn't help you if your computer or the server is compromised. Whether you trust Google or not is your choice. The way I see it, every entity that stores data will eventually suffer abuse, a leak, or a breach. So if you're at peace with that risk, then HTTPS is enough.