The logical terminus of your argument is that ROT13 counts as a cipher. Surely there's some attacker ROT13 stops. While you're at it, Base64 the "ciphertext". There are professional pentesters who can't recognize Base64 on sight! And if you want to get really tricksy, swap the Base64 alphabet around. That'll hoist that fruit up higher.
In reality, no credible cryptosystem accepts the flaws I'm talking about.
No, it doesn't. By 'low-hanging fruit' I mean anything that can be automated to the point of mass data consumption and processing within the scope of a private corporation's resources.
ROT13 clearly falls within this formula.
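For what it's worth, undoing that "layered obfuscation" is trivially automatable — a minimal Node sketch (the sample string is illustrative):

```javascript
// The "layered" scheme mocked above: ROT13, then Base64.
const rot13 = (s) =>
  s.replace(/[a-z]/gi, (ch) => {
    const base = ch <= 'Z' ? 65 : 97; // 'A' or 'a'
    return String.fromCharCode(base + ((ch.charCodeAt(0) - base + 13) % 26));
  });

const obfuscated = Buffer.from(rot13('attack at dawn')).toString('base64');

// Reversing it needs no key and no human: two calls, run at corporate scale.
const recovered = rot13(Buffer.from(obfuscated, 'base64').toString('utf8'));
console.log(recovered); // attack at dawn
```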
AES-256 client-side encryption, even in JS, over a 256-bit SSL connection, with a 32-character base key-phrase uniquely generated for every message, expanded into the full key, and exchanged and stored securely in a location separate from the encrypted data itself, currently falls outside this scope.
That's true, but I'm not sure everyone should be quite so defeatist as you with respect to implementing crypto. It has and can be done, after all, it just requires care beyond "hey, my PHP script outputs some HTML." The same goes for many, many aspects of programming: design, object lifecycle, memory allocation, error handling, etc. Programming is hard, and implementing crypto is no different. The difference is that mistakes are not immediately obvious and that they can be severe. But that, again, is the case with many things.
If there's anything I've learned so far after observing the security world for a few years, it's that implementing crypto is very different. I think that this mindset from programmers -- "crypto is just like programming" -- is what causes them to constantly, and almost without fail, screw up their password storage systems, their authentication mechanisms, their crypto functions, and pretty much everything else security-related.
Software development in general tolerates (and in some ways even encourages) "acceptable risk". Programmers don't often try to prove that their functions are correct and free of any potential errors. In security-related programming, you can't take the same approach.
Implementing crypto is very, very different. If you're "iterating" the design of your crypto functions, you're doing it wrong.
The reason people screw up password storage is because they don't know they are supposed to think about it. If they flipped the "oh, this is important" switch, the result would be better. But at the end of the day, their bosses are asking them for shiny widgets, not a secure backend. So they are doomed to fail.
The same goes for the non-security things I've listed. The problem is much deeper than getting crypto wrong -- most programmers today get everything wrong. That's why I think that if you're the type of person who can think carefully about programming, it's not too unsafe to implement AES in Javascript and use it. You know what the weaknesses of Javascript-based crypto are, and you know how to implement crypto. In that case, why not do it?
Remember: most non-crypto software is massively incorrect. If we can't trust people to implement crypto, why can we trust them to be programmers at all?
I think I see where you're coming from, but I still disagree: I've seen way too many examples of otherwise competent programmers still stuck on, for instance, the notion of using salts with fast hash functions for password storage. Hell, MtGox had a post just the other day about their all-new triple-salted SHA256 password storage! Somewhere that day, there was a faint groan from the dismally small set of people who are knowledgeable in password storage and are interested enough in Bitcoin to have read about that.
The difference between bugs in non-crypto software and bugs in crypto software is that bugs in crypto software can have much more severe and far-reaching consequences. So, while I might trust a programmer to write decent non-crypto software, I would prefer not to trust them with writing crypto software.
edit: actually, there's more to it than that, on second thought. Crypto also requires a greater depth and breadth of expertise. The math knowledge required for general programming is trivial by comparison; about the worst it usually gets is vector-based math, or simple calculus, or big-O notation. But to understand crypto well enough to implement it correctly requires a much greater knowledge of mathematics -- something which most programmers don't have.
> You know what the weaknesses of Javascript-based crypto are, and you know how to implement crypto. In that case, why not do it?
Because I (the rhetorical "I" in this case) know what the weaknesses of JavaScript-based crypto are. :-)
Do you know where the weaknesses of Javascript crypto are? I'm not sure I know all of them, and I know a couple that don't have solutions that don't require browser plugins.
Where has Javascript crypto been done successfully? I'm not aware of any application with a sound use of it. There are a few narrowly-tailored experiments, all of them backed by SSL, that don't have obvious flaws. But the best-known examples, like what Meebo did a few years ago, were deployed despite glaring security flaws.
The killer for me has always been #7 on Nate Lawson's list: Auditability. How do you tell that your browser is using the right copy of the code to do the crypto?
This is an excellent article on this subject. The author (Nate Lawson) is thorough in his argument. His conclusion is "I am certain JS crypto does not make security sense."
Could anyone explain what the use case is for encrypting text on a web page using JavaScript? I don't understand how this library is useful except when used in Chrome extensions like the one used by LastPass.
If this lib is used on a page to decrypt/encrypt user data before sending it to the server, it's theoretically possible for the host to steal the private key simply by injecting JS code that copies it.
It's worse than that. Among many other things, it's possible for any attacker with an XSS bug in any component of the DOM of that page (cached or fetched fresh) to steal keys. Modern browsers provide no way to verify the whole JS runtime to ensure that no function that your crypto depends on has been backdoored, but every JS implementation allows functions to be overridden.
It's the worst possible environment to implement crypto in, and you should never do it.
XSS is in your own control (as a site owner/developer), and it is less likely to happen than the user machine being infested with malware to begin with. And the malware infestation threat has never stopped anyone from advocating native (non-browser) crypto, right?
The difference between normal software and browser javascript is that browser javascript is effectively re-installed every time you visit a web page. People install new software packages less than once a week.
This is a nerdy argument I'm not particularly interested in hashing out again, so you're welcome to the last word.
Your argument here looks like, "other things have vulnerabilities, and JavaScript has vulnerabilities, so it is equivalent to other mechanisms."
If that is your argument, then the conclusion is wrong, because JavaScript implementations are vulnerable to everything that everything else is vulnerable to, and then some.
There is nothing that a JavaScript implementation actually protects you from.
That's not my argument at all. My argument is that the probability of a well-designed site having XSS (p1) is much less than that of a user's machine being infested (p2). When you start using both, as you say, we end up with a compound probability of a breach of 1 - (1 - p1)(1 - p2) ≈ p1 + p2, which is strictly worse than p2 alone, but if p1 << p2 we are not losing much to begin with, and it may be justified if we're gaining as much or more elsewhere.
In other words, global optimization may require local pessimization.
E.g. if we gave the user ability to store secret data without the key ever leaving his possession, he might be more likely to use the service and stop storing his secrets in a notepad file. However if we don't guarantee that the key will not leave the user's possession, the user may decide not to use the service.
> ...and it may be justified if we're gaining as much or more elsewhere.
Except that, in terms of real security, we aren't. That's the whole point. You're right in describing it in the way you did, but then you get to this point where there's this implicit assumption that JavaScript-based crypto gains you something. Which, maybe, leads into your next point...
> E.g. if we gave the user ability to store secret data without the key ever leaving his possession, he might be more likely to use the service and stop storing his secrets in a notepad file.
OK, but this is a different problem. The correct solution here is a true client-side app or a browser add-on (and even then ... ehhhh). Otherwise -- and this is a point that I don't feel like I can emphasize enough -- you are giving your users a completely false sense of security. You're selling a service by saying that "we'll store your secrets for you and you don't even have to ever give us your key, so it's more secure than keeping them in a notepad file", but that's demonstrably false.
> However if we don't guarantee that the key will not leave the user's possession, the user may decide not to use the service.
Also a different problem. You can't sell security this way. All you're doing is taking advantage of people's ignorance. If everybody understood the security risks of JavaScript-implemented crypto, and you sold the same service -- "we use JS so you never have to share your keys" -- then the users would decide not to use the service!
To reiterate:
1. Server-side encryption is a real protection from accidental or malicious data leaks (db dumps);
2. SSL is a real protection from MITM and eavesdropping (mostly, with caveats);
3. If the server software gets compromised, you're pooched no matter what.
So, again: JavaScript solves none of these problems.
The only thing that JavaScript does, is add more problems.
>You're selling a service by saying that "we'll store your secrets for you and you don't even have to ever give us your key, so it's more secure than keeping them in a notepad file", but that's demonstrably false.
Now, aren't you getting carried away? Physical loss of a laptop will compromise the notepad file, but not the client-side encrypted data. Malware on the laptop will compromise the notepad file 100%, but any crypto-based solution will only be compromised if it's been used during that time. Fire/flood/theft will deprive the user of his secrets altogether.
Seriously, are you claiming that a notepad file is more secure than server-based storage with client- or server-side encryption? That's an extraordinary claim.
1. That notepad file can be encrypted on the laptop; and
2. "Web app" programmers so often get security really wrong. This entire thread is just one of a huge number of examples of that.
Honestly, any hosted solution that relied on JS for encryption or authentication would encourage me to keep storing secrets in text files on my laptop.
Here's a fun, easy way to understand why hosted solutions are rarely a good idea for storing private data:
- Make a chart with four columns;
- Column 1 is "Unencrypted local storage"; column 2 is "encrypted local storage"; column 3 is "remote storage with JavaScript encryption"; column 4 is "remote storage with SSL + server-side encryption";
- Under each column, write down a list of every method you can think of by which that particular system could be broken. Take your time and be creative.
- Cross out any methods that all of the columns have in common.
How is this any different from an XSS bug resulting in an attacker stealing the plaintext password from a form?
Also, given that the client and the server trust each other and communicate securely, doesn't this just get reduced to the probability of having these bugs and nothing more?
If the client and the server can trust each other, then JavaScript encryption literally adds no benefit.
Think of it this way: your browser contains some areas whose sole responsibility is to verify the authenticity of a remote server (SSL), and those areas are completely inaccessible from the DOM. So, basically, as long as you can trust your web browser, then you can trust the connection.
But those protections don't exist for JavaScript. An attacker could compromise your server, rewrite your JS, and you'd never notice. (No JS signing capabilities in the browser; no TOFU/POP style architecture.) An attacker could use an SQL injection on your CMS to leave a comment that uses XSS to modify your JS encryption while it's running, and you'd never notice. You could be on an unsecured network and someone could MITM rewrite your JavaScript in-flight, or, in the cases of bad JS crypto implementations (which almost every single one is), simply use a replay attack at their leisure.
There are all kinds of really neat ways to attack JavaScript.
It's true that there are also almost as many ways to attack any other authentication/encryption system, but the point with attacks against JavaScript is that JavaScript adds no extra security at all -- it is at least as vulnerable to any of these as anything else is -- and it adds a false sense of security, and it is vulnerable to things that SSL is not, and in some of the scenarios, you will get absolutely no indication that you've been compromised.
Suppose you want to store confidential data with the server. You trust the server owners to be the good guys, but you also know that data breaches happen (e.g. equipment theft, FBI seizures, backup tape leaks), so if/when a breach happens the thieves will have made off with an encrypted copy of your data. Your data is jeopardized only in one of two cases:
1. The people maintaining the service turned out to be not trustworthy, and have been harvesting the keys all along.
2. The hackers took control of the server and injected code to harvest the encryption keys for long enough to catch you in the net.
If you look at the relative probabilities of these two events compared to a straight-up data leak, we're looking at an orders-of-magnitude reduction in risk. Most people who have done the right thing in the past are in the habit of continuing to do the right things, so you can lean on the host's reputation - we've been doing that for thousands of years. Code injection on the site is much less likely than a data leak, and it is a lot less fruitful for the attacker, as he would have to sit there undetected and wait for enough users to punch in their keys.
No, it doesn't. JS implementations suffer from all the same flaws that everything else does, plus a few more, and it offers no protection from any flaws that anything else is vulnerable to.
"You can never be completely secure" is not a good justification for security theatre.
Well not really, because if you can verify the scripts that are loaded (and side-loading JavaScript would have to be a targeted attack that compromised either the server, or the Google API, or a malicious extension), then it guarantees end-to-end encryption for the user as opposed to having to send plaintext over the wire.
How exactly would you implement a browser based crypto solution?
That, plus a notification to the user if the signed JS has changed since the last time it was loaded, with SHA hashes of the scripts (and other data) stored directly in the browser in a way not accessible from the DOM.
Since in-page scripts could still on-the-fly rewrite the functions of loaded JS, they would have to be provided read-only by the browser, or there would have to be some kind of out-of-DOM API for working with them.
My hope would be that we'd see a handful of signed libraries provided and reviewed by cryptographers and that they wouldn't change very often because it would be a pain in the ass when they did.
But: I am not a cryptographer or even a qualified security expert. There is probably a good reason not to do it this way.
How is this a justification for JavaScript crypto? It might be a justification for some form of crypto, but if that's the case, why not stick with SSL + server-side encryption?
Either will work for the given scenario. The only difference I see is that with client-side crypto you can't accidentally write the key into a server log file, whereas with server-side crypto it is possible.
But then again, at some point you will add some logging to the client-side code as well, so the point will be moot - you will have to sanitize logs at the point of production.
Another thing is that it makes for much better messaging - "the encryption key never leaves your machine".
Why do you think the other method does not have the same weaknesses? If the browser is compromised, the key that the user enters into the web page is leaked, regardless of whether you use client-side or server-side crypto. The fact that with client-side crypto the code itself could have been tampered with, in addition to the encryption key being leaked, does not add anything to the threat... does it? The key is already presumed to have been leaked, and things cannot get any worse than that.
As far as security is concerned, the two are equally (in)secure. And client-side crypto still has the advantage that key management is much easier to explain to the user, thus the app is more likely to be used.
> Why do you think the other method does not have the same weaknesses?
Trivially: JavaScript encryption is vulnerable to MITM attacks that SSL is not. The only solution to this is to deliver your JS over SSL, but browsers will complain if you try to mix https and non-https elements in a page, and you might as well serve the whole page over https anyway. If you do that, then you still don't need JavaScript. (As one example; I know there are others but I just came in from the yard and my brain is a little melty right now.)
> As far as security is concerned, the two are equally (in)secure.
This is a statement which flies in the face of the recommendations of some very smart people in the security field, some of whom have written extensive posts which have been linked to in this thread.
Even if I didn't (barely) know enough to evaluate your claims on my own, I could conclude that you might be right ... but I wouldn't bet on it.
Look, this subject has been discussed to death and ad nauseam, including here on HN. There have been extensive essays written about it. If you don't want to believe me, I'm OK with that. Go read their stuff, and see if it makes more sense to you.
If, after reading all of that, you still think you have some new insight in the field that everyone else has overlooked, and you can show why JavaScript is "equally (in)secure" to SSL + server-side encryption, then make your own blog post or essay about it. Make it good and in-depth, link to it from HN, tell everyone about it. I'd be happy to read it.
Otherwise you and I are just going in circles and not getting anywhere.
Update:
I should probably note that I haven't looked at the implementation, and there's no way I'd do as I mention above without having complete control of sources.
Be careful: if you are using a javascript library to encrypt data, you get protection from eavesdroppers, but not from man-in-the-middle attacks. If I can alter data that you send/receive over the wire, I can simply modify your aes function calls to xor the data with a predictable sequence that I generate. The only way to prevent this that I have thought of is to store the site (which has presumably already been acquired over a secure channel) locally on the client's machine, and make AJAX calls over the encrypted channel for fresh content.
I don't think you know what you are talking about. If your authentication is implicit in the decryption of the messages, replaying that data will accomplish nothing.
For example, if I share a secret "password" with my server, I can do this in javascript using any symmetric cipher:
ToServer: "My name is oconnore"
ToClient: cipher(iv1, "(C1) Hi oconnore. Use this! iv2=rng()"++msghash, "password")
ToServer: cipher(iv2, "(C2) Thanks, give me my data please!"++msghash, "password")
ToClient: cipher(iv3, "(C3) <Some data>, and iv4=rng()"++msghash, "password")
...
If an attacker repeats any of these messages, the client/server will discard them based on the counter. Of course, for real use you would need to use public-key crypto to avoid storing passwords in what is essentially plaintext, but I thought this would illustrate how replay attacks are a non-issue.
Of course, as noted before, all this is irrelevant if you are sending your code over a channel vulnerable to MITM...
You are trusting the server, you are trusting the quality of every line of code on the server that has a hand in generating the page content (since any DOM corruption flaw will let you trivially backdoor the encryption), and you're trusting every third party server from which HTML or JS content is sourced.
But if you wanted to, you could verify the authenticity of the code. If you're just sending plaintext off into the void, then you have zero control whatsoever. You could also open up Wireshark to verify your plaintext isn't being transmitted anywhere.
How exactly are you verifying the authenticity of the code? By looking at the Javascript code? Even if you're among the 0.0000000% of users who could look at JS AES and know if it's backdoored, how do you know if any of the Javascript functions it's calling have been backdoored? Similarly: what does it matter if you can see plaintext on the wire? The attacker siphoning off your data can re-encrypt it for themselves. You helpfully provided them a library to do exactly that.
When was the last time you checked whether your compiler was producing backdoored code? How about those chips on your motherboard? Perhaps an instruction or two are being executed without your consent?
You are just being paranoid at this point.
Like someone mentioned, using this to encrypt state secrets? Stupid idea. Using it to provide some level of trust that the server never sees the plaintext data? Pretty reasonable.
Why are you protecting against the server never seeing plaintext data? In case the server is compromised? If the server is compromised, then so is your JS!
The difference between traditional compilers, chips on motherboards, and so on, is that you aren't downloading and using a fresh copy every time.
If it were possible to securely store some JavaScript in your browser (trust-on-first-use), or otherwise verify that the JS hadn't changed since the last time you used it, and if that JS weren't accessible from the DOM, and if that JS were well-written and used standard practices and had been reviewed by cryptographers, then maybe it would be OK to use.
SJCL has a bug in their RSA implementation. We're using a good bit of their code, with a few changes, for our web client. The idea is that we don't want to store passwords, so the web client stores an encrypted private key and everything sent to the server must be signed.
The user's id is a SHA-256 hash of their public key, and all we keep are the public keys.
Working so far in FF and Chrome; not even trying it in IE.
Yes, exactly. I would love to have cross-device SSO and authentication. Having proper cryptography available in the browser (either built-in or through an extension) would make that easier to build. As a bonus: phishing-resistant.
http://www.fourmilab.ch/javascrypt/
Moreover, Walker's code is public domain and he discusses the security aspects, whereas the OP link appears not to care about such things.