Challenge HN: Break my in-browser crypto for $1000
88 points by absherwin on May 29, 2013 | hide | past | favorite | 152 comments
After reading arguments about using cryptography in the browser, I wanted to see how easy it was to fail in practice so I built a simple app: notecrypt.appspot.com. It encrypts notes using SJCL and stores their encrypted form on the server and enables retrieval with the same credentials.
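For context, SJCL's convenience call `sjcl.encrypt(password, note)` derives the AES key from the password with PBKDF2 and then encrypts with AES-CCM. A stdlib-only Python sketch of just the key-derivation step (the function name is mine; the parameters mirror SJCL's documented defaults of the time — 1000 iterations, 128-bit key, random 8-byte salt — and AES-CCM itself isn't in the Python stdlib):

```python
import hashlib
import os

def derive_note_key(password, salt=None, iterations=1000, key_bits=128):
    """Derive an AES key from a password via PBKDF2-HMAC-SHA256,
    mirroring SJCL's defaults (1000 iterations, 128-bit key)."""
    if salt is None:
        salt = os.urandom(8)  # SJCL stores this salt alongside the ciphertext
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              iterations, dklen=key_bits // 8)
    return key, salt

key, salt = derive_note_key("hunter2")
assert len(key) == 16 and len(salt) == 8  # 128-bit key, 8-byte salt
```

The salt and iteration count travel with the ciphertext, so the same credentials re-derive the same key on retrieval.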

From a security perspective, this approach is uniformly as good as or better than the same app without in-browser crypto. As daeken noted in a discussion a few days ago, it provides protection against a demand to produce the notes. He also claims that "For all other cases, there are a million simple means by which you can break it." So I challenge anyone on HN: show me such simple means.

The instructions for claiming a reward are contained in a note stored in the app's database. To help you along, I'll navigate to any URL you send me while logged into the site. If you'd like to try your hand at finding a vulnerability in the AES implementation used, I'll send you a snapshot of the database. If you want to try injecting code, I'm happy to use a network you control if we can arrange the logistics though I won't promise to log in if you can't provide a valid SSL certificate :). If you need assistance to demonstrate a reasonable attack, ask. Post your attacks below or email my user name @gmail.com. The first successful attack wins. Usual disclaimers apply.

You might argue that this doesn't prove anything about JavaScript cryptography since additional security is used. That's true. If this survives, it doesn't provide evidence that the in-browser cryptography is sufficient; it only suggests that it can defeat some additional threats against an otherwise secure application.




You posted the link like this:

"...I wanted to see how easy it was to fail in practice so I built a simple app: notecrypt.appspot.com"

If you copy and paste that into your browser, your browser will make an HTTP request. Your webapp will send back a 302 redirect with an HTTPS link, but if I'm an attacker running sslstrip (http://www.thoughtcrime.org/software/sslstrip), that won't work.

What's more, you don't set HSTS headers:

$ curl -i https://notecrypt.appspot.com/
HTTP/1.1 200 OK
ETag: "ibqapA"
Date: Wed, 29 May 2013 18:59:37 GMT
Expires: Wed, 29 May 2013 19:09:37 GMT
Cache-Control: public, max-age=600
Content-Type: text/html
Server: Google Frontend
Transfer-Encoding: chunked

...so anyone that types "notecrypt.appspot.com" into their browser in the future will continually expose themselves to the same vulnerability.
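For reference, HSTS is a single response header: once a browser has seen it over HTTPS, the browser rewrites plain-http requests to that host to https for `max-age` seconds, closing the repeat-visit window sslstrip exploits (first visits remain vulnerable unless the site is on a preload list). A minimal sketch of building the header — the helper name is mine, not anything from the app:

```python
def security_headers(max_age=31536000, include_subdomains=True):
    """Build the HSTS header every HTTPS response should carry.

    After seeing this header once over HTTPS, a conforming browser
    refuses plain HTTP to this host for max_age seconds."""
    value = "max-age=%d" % max_age
    if include_subdomains:
        value += "; includeSubDomains"
    return {"Strict-Transport-Security": value}

print(security_headers())
# {'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'}
```

A year-long `max-age` with `includeSubDomains` is the usual deployment; the value is cheap to send on every response.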

At that point I can just modify the JS that gets transmitted back to the browser such that it doesn't actually do any encryption.

As always, webapp-based JS cryptography is reducible to the strength of SSL. If I can break SSL, I can break your JS crypto, so there's really no point in doing the JS-based crypto.

How do I collect the $1000?


SSL vs JS crypto is kind of missing the point.

There are use cases that SSL does not address. This contest is obviously silly, and I'm not defending the app, but I take issue with the "break SSL = break JS crypto" argument.

sslstrip, while a legitimate issue, is closer to phishing than an attack on JS crypto. It's based on tricking the user, not breaking the protocol. I'm not trying to downplay it but I don't think it's fair to equate that with an attack on JS crypto. In an audit, I would point out the things you mention.

Consider:

1. The user knows how to look at their address bar and verify that they're on HTTPS.

2. The server gets raided or changes ownership, and the new "owners" are untrusted.

Now the inevitable counter argument is, "if you don't trust the server, how do you trust the JS that they are sending you?"

My retort would be:

1. Such attacks wouldn't be retroactive. Data is only compromised from the point that you execute the modified JS forward. If it gets owned or raided, and you don't log in, you're still safe. If you trust just SSL, you don't gain this advantage.

2. There can be varying degrees of trust. I can trust that the site operators won't actively attack me, but not trust that the data is unreadable by anyone. Your webhost could expose your data either out of malice or carelessness. It's a significant escalation for your webhost to hijack your site and start changing the code in it.

I'm not trying to suggest that JS crypto is safe or that this is a solved problem, but SSL does not provide any solution at all here. JS crypto provides a poor solution, but a much better one than only SSL (though of course it must also be used with SSL).

Mega is an example of where JS crypto might have some value. For users to be charged, Mega has to not only get raided; the feds then have to keep it running long enough for users to be attacked with the malicious JS, after which their files can be decrypted. That is far better protection than the first iteration, where a raid meant the feds already had everything they needed to charge everyone, and they just had to start digging offline.


> [..] but I take issue with the "break SSL = break JS crypto" argument.

Respectfully, I'd suggest that this is not the argument.

The argument is that breaking SSL = accessing the data the crypto protects. Crypto is a means to an end...which is protecting sensitive information. moxie decidedly did not "break" the crypto, because he didn't have to.

It's a bit like a bank vault with the world's strongest door, but when it was installed they put in a 4' x 5' window next to the door. The vault door (aka the crypto) is fine and unbroken, but I still can get access to your money because your security system has a weak point.


But it's not a given that you can "break" SSL. The problem that moxie is highlighting actually has no solution (even HSTS isn't a solution) and relies on users to not notice that they have been downgraded to HTTP.

Unless moxie also wants to say, with a straight face, that twitter and gmail would be better off disabling HTTPS, I don't think that claim is fair.


> I can trust that the site operators won't actively attack me, but not trust that the data is unreadable by anyone.

Who would it be readable to if not the site operators?

> Mega is an example of where JS crypto might have some value.

So it might be useful to someone, somewhere as a legal subterfuge.

> For users to be charged, Mega has to not only get raided, but the feds have to keep it running long enough so users can be attacked with the malicious JS

Good to know that it's secure in the absence of an attacker.


Well you're being snarky, but I'll address those points anyway. Please understand that I'm not advocating for JS crypto or ignoring its other problems.

> Who would it be readable to if not the site operators?

SSL offers no protection against the following:

Amazon AWS employees. Azure employees. External attackers with read-only access. Employees of the website with read-only access. Employees that get curious and want to snoop, but not badly enough to inject code. Anybody with access to the backups. Other tenants on the same cloud who notice that memory or storage wasn't properly zeroed. Someone who breaks into the office and physically steals the server.

> So it might be useful to someone, somewhere as a legal subterfuge.

If you acknowledge that it's sufficient to prevent evidence recovery, yet take the position that this is the only practical scenario under which that property has any utility, you're not being a very creative thinker. I'll try to expand your mind a little with an easy, real example.

There was a Google "Site Reliability Engineer" who got fired for snooping on young girls' chats and harassing them. He, or others in his position, would not have been able to deploy new Gmail code to perform an active attack, but reading offline storage is trivial. If Gmail sent down JavaScript that encrypted the data before Google stored it, this employee would not have been able to snoop. I'm reasonably confident that rogue Google or Facebook employees aren't going to target me, insert active attacks, and get away with it. I'm not very confident that nobody there will ever passively snoop (no need to speculate; both companies have had public incidents at least once).

http://www.informationweek.com/security/vulnerabilities/drop...

"Dropbox on Monday acknowledged that its vast store of files was left open to the world on Sunday for four hours as a result of a bug. During this period, any account could be accessed using any password.

The flaw, a software bug that rendered the service's authentication mechanism non-functional, only took five minutes to fix, once it was discovered.

As if on cue, Wuala, a competing cloud storage service operated by hard disk maker La Cie, published a blog post on Tuesday stating that Dropbox's problems wouldn't be an issue if files were encrypted by the client. "Encrypting your files before they are sent to the cloud makes Wuala inherently more secure than solutions that rely on server-side encryption," the company said."

http://news.cnet.com/8301-1009_3-57578766-83/vudu-resets-use...

> Good to know that it's secure in the absence of an attacker.

No, I'll give you a few minutes to read it again though ;)

Again, to reiterate, JS crypto has major issues, but it's untrue that you can simply use SSL instead. These are different problems. JS crypto, in its broken state, offers some improvement for some real-world scenarios that SSL does not even attempt to address.


> If you acknowledge that it's sufficient to prevent evidence recovery

Any snake oil is sufficient to thwart some attacker somewhere some of the time.

Browser Script Crypto adds a lot of complexity. It also adds a metric crapton of insidious counter-intuitiveness to the security model.

These are real and serious engineering drawbacks to BSC. In order for BSC to be worthwhile in a given situation, you need to show that any benefits it may bring outweigh the drawbacks, and do so better than other solutions.

This is never the case in practice, because BSC is vulnerable to all the same attacks as the web browser.

> There was a Google "Site Reliability Engineer" who got fired for snooping on young girls chats and harassing them.

Solution: Publicly fire the guy. Don't log the chats.

> If Gmail sent down Javascript that did the crypto before they got it, this employee would not have been able to do that.

I think you're assuming that based on the belief that this engineer would not have been able and willing to conduct an active attack. What is your evidence for this belief?

> I'm not very confident that nobody there will ever passively snoop

Snooping is their (Google/Facebook) business model. It's just a question of whether or not their internal authorization process was followed.

BSC doesn't solve the problem of the insider threat. It may make it marginally more risky for some insiders some of the time, but there are already an unlimited number of ways to make life more complicated for insiders (legitimate or not).

> The [Dropbox] flaw, a software bug that rendered the service's authentication mechanism non-functional,

As a rule, if the authentication is broken, the encryption is too.

> [La Cie] stating that Dropbox's problems wouldn't be an issue if files were encrypted by the client

Only maybe if the user chose a strong password and there was literally no way for users to recover their data if they lost it.

I'm not against client-side encryption. I'm a fan of PGP and Truecrypt.

> JS crypto, in its broken state, offers some improvement for some real-world scenarios that SSL does not even attempt to address.

I can think of one or two such scenarios, but not many. Mr. Dotcom may well be the exception that proves the rule; his primary adversary is the US DOJ. It will be very interesting to see how this plays out.


Cool, so you're being kind of disingenuous in your arguments by ignoring my underlying point (SSL does not do these things) and my disclaimers (JS crypto has huge, real problems, I'm not an advocate for it), and instead just replying so you can invite moxie and tptacek to circlejerk on twitter apparently: https://twitter.com/marshray/status/340239501446217728

Instead of making more comments for you to take out of context and condescendingly straw-man, I'm going to withdraw from this effort.


> Cool, so you're being kind of disingenuous in your arguments by ignoring my underlying point (SSL does not do these things)

I am not being condescending; I am being completely serious here: maybe I just don't understand your underlying point.

What are "these things that SSL does not do" that JavaScript-over-SSL does do?

> and my disclaimers (JS crypto has huge, real problems, I'm not an advocate for it), and instead just replying so you can invite moxie and tptacek to circlejerk on twitter apparently: https://twitter.com/marshray/status/340239501446217728 Instead of make more comments for you to take out of context and condescendingly straw-man, I'm going to withdraw from this effort.

Oh, give me a flippin' break. Anybody who clicks that link can see that it is a reply to someone.

Stand up for your point with rational arguments or admit your system is pwned by sslstrip.


Here it is worth pointing out that despite losing his "contest", badly, Adam Sherwin decided not to pay Moxie Marlinspike.


It seems like there are two strains of thought in this little drama:

1) This is just stupid because JS Encryption is only as strong as the weakest link in the whole website (SSL, Third-Party Scripts, etc.)

2) Actually completing the challenge, which I assume means hijacking Adam's session and retrieving and decrypting the reward instruction note.

It's okay for people to focus on point one, I suppose, because otherwise innocent people may get hurt if developers actually come away with the conclusions that JS Encryption is safe.

As for point two, I think it's okay to provide an intellectual proof of success in lieu of actually doing it, but what does that mean? Doesn't it hinge, in Moxie's case, on the condition Adam states, If you want to try injecting code, I'm happy to use a network you control if we can arrange the logistics though I won't promise to log in if you can't provide a valid SSL certificate :)?

Moxie would have to MITM notecrypt.appspot.com which I assume is allowed because of, I'm happy to use a network you control. However, what about the further condition, though I won't promise to log in if you can't provide a valid SSL certificate? This is unclear to me. A valid certificate to what? notecrypt.appspot.com? Moxie's tool is called sslstrip, not sslreplace, so I don't think it can provide a valid SSL Cert, in which case he would not succeed. Perhaps he would strip the SSL from the JS file only, but that might yield a mixed content warning.

What am I missing?


+1. If he's paying $1000 so he can personally witness how something like this works, rather than reading about it, it's a fine learning experience.

I'll totally back away if he tries to make this into a product.


He's not paying. He contacted Moxie Marlinspike privately, and then Moxie posted on Twitter saying he wasn't paying. So he's not only wrong, but also a weasel. A WEASEL. I said it!


I haven't paid anyone yet because no attack has yet met the conditions. As others have noted, a valid SSL certificate is required for this to work.

Moxie suggested that I acquire a certificate for a similar domain to meet that requirement, but that doesn't meet the normal definition of valid (if a show requires a valid ticket to be shown at the entrance, will one from last night do?). He did claim that producing a forged certificate is possible, and I have no reason to doubt him. However, he noted that neither he nor anyone else would produce one for a reward of this amount.

We were still corresponding when he made the tweet in question. I ultimately asked him whether he believed that a certificate for another domain was a valid certificate; his reply didn't address that point but instead discussed why he and tptacek are frustrated by these kinds of contests.

I'm sorry I seem to have offended you.


You haven't offended me.

You did however launch a "contest" to demonstrate the soundness of using browser Javascript cryptography to protect user secrets, and made exactly the kind of slip-up that trivially demonstrates what a bad idea browser JS crypto is: to wit, your application couldn't even protect its own pages, let alone use them to safely deliver crypto code; you provided an ambiguous address and didn't properly lock the server down to SSL, so you couldn't even rely on users getting to your app under SSL.

Your app wouldn't have demonstrated any of the security value (largely nonexistent) of browser JS crypto even if it had been careful about SSL hygiene. But you spared Moxie the trouble of making that case.

Somehow I doubt Moxie cares too much about the $1000, but: you screwed up, lost the contest, and then (from what I can see) weaseled. The real point isn't "pay Moxie $1000"; it's, "don't try to run contests like this".


You can say it's a dumb contest, just like I will, but you can't say he weaseled. Unless you would like to first admit to a reading disability.

It says, plainly, he wouldn't click unless there was a valid SSL cert. Do you know what sslstrip does? I know you do, so stop playing dumb.

"I won't promise to log in if you can't provide a valid SSL certificate :)"

Apologize to absherwin and admit you were wrong, or it is you who is the weasel.


I can live with your belief that I'm a weasel. I think Adam Sherwin owes Moxie $1000. I stand by my opinion.


That aside, is there a reason you're naming and shaming? It seems a little malicious. You know that your announcement is going to come up under search results for his real name.

If that wasn't intentional and you hadn't considered it, it would be classy to delete that. Granted "absherwin" is not totally anonymous, your derogatory proclamation attached to the proper form of his name is a bit much. He's misguided, not malicious.


Naming and shaming who? Unlike you, Sherwin has never been anonymous. Like me, when he criticizes people online, he signs his name.


> Naming and shaming who?

Again, stop playing dumb.

Type his Firstname Lastname into the search bar at the bottom - your "name and shame" post is the only result.

> Unlike you, Sherwin has never been anonymous.

News flash: Moxie Marlinspike isn't Moxie's real name either.


Moxie Marlinspike is not hiding behind an alias, and neither is Sherwin. I didn't sleuth his name from anywhere; it was public at least since last night, on Twitter. Presumably, the guy who starts a contest under an HN nick with his name in it and then emails numerous people from an address with his name in it is not trying to hide his identity.

Nobody is playing dumb with you. I'm not afraid to be wrong in a debate with you; I just haven't had the opportunity to be, yet.


Yes, my name is public. I don't really care about you mentioning it. What I care about is my name being mentioned in a way that implies my dishonesty, which is all lawnchair_larry originally argued. His argument had two components: you're making a false accusation, and it's more harmful because you're using my real name.

You chose to reassert that you believe your accusation is true without responding to the specific points raised, and to turn the focus of the discussion to the subsidiary point.


I am not making a false accusation. I understand that you believe I have, but we both know I don't agree with that. You and I also both know why I haven't delved into whatever specific points you think need delving-into.

You introduced your name into the discussion, presumably because you're confident enough to sign your name to your opinions and arguments, which is admirable. It is thus not a valid argument to suggest that acknowledging your (public) name is a malicious act. But the obvious invalidity of the argument clearly didn't stop 'lawnchair_larry from making it!

If it helps you any, you can peruse the rest of my comments on HN. Whenever it's reasonable, I try not to use nicks and handles; I call 'patio11 "Patrick", I call Paul Graham "Graham" (I don't know him in person), &c.

Have you noticed how, despite you repeatedly insinuating that I'm knowingly making false statements about you, I'm not huffing and puffing about it? That's because the huffing and puffing is dreadfully boring and teaches us nothing about anything whatsoever. I think we can all agree this nitpicky little subthread isn't teaching anything either, so I'll concede it to you and 'lawnchair_larry, and respond to you elsewhere on the substantial points.


Look, the fact is you're directly attaching Firstname Lastname to a derogatory claim, signed by a respected expert, to the permanent record of the searchable internet over something rather petty.

Is it worth dragging someone through the mud over? He said he didn't appreciate it, just edit it out. It's more important than "being right."


There is an extreme lack of detail here for me to come to that conclusion about another human being.

Did Moxie think that his description of what he would do was sufficient? Adam offered to enter whatever environment was proposed.

Has Adam now refused to participate in that? Has he participated in that and Moxie broke it? If either of those has happened, I've seen nothing written about that on this HN page, on Moxie's blog, on Moxie's twitter feed, or on TFA. Maybe there is some big part of the discussion I'm not seeing.

Maybe it's not worth Moxie's time. That is perfectly fine. There's lot of things that I could do but don't because they're not worth my time, but I don't get to say I did them.


I truly appreciate your attitude. As you can imagine, having my integrity questioned is quite hurtful. So before I address your point, thank you. Your comment means a lot to me.

Moxie did provide a more detailed attack description. I had to correct an iptables command and modify sslstrip to alter the JS but otherwise his instructions were complete and correct. Per the original instructions, he should have had to configure the proxy but that's purely a function of who does that labor so I happily did the small amount of extra work.

Once I completed the attack, I confirmed what I had suspected from reading about sslstrip: the SSL certificate was removed. I thanked Moxie for his efforts and praised his work but informed him that the attack didn't meet the initial requirements. He argued that it works in the wild and that I could acquire appspot.cc and a certificate for that domain. I replied that if he wanted to conduct a demo convincing enough that I would be fooled into clicking, I'd consider that sufficiently close to valid. Alternatively, he could produce an SSL certificate that the browser would accept as valid for that domain. I agreed that a user in the wild might not be as cautious, but in agreeing to route my traffic through a malicious proxy, I recognized I was giving an attacker a significant advantage, and that's why I made a valid SSL certificate a requirement in the initial post.

He replied that any LAN can be malicious, so he disagreed with my assessment; that he didn't trust I wouldn't throw up more unreasonable objections; and that I should pay.

I replied outlining exactly how I would evaluate any alterations to the MITM he was proposing. It was at this time he posted to Twitter. I heard about that and emailed him, and he denied he was accusing me of dishonesty, saying he was only noting that I had declined to pay the reward tptacek thought he was owed. I remained concerned that it wouldn't be taken that way, and unfortunately tptacek has proven my fears correct.

In any event, I asked him at this point whether he believed his approach genuinely met the valid-certificate requirement, because I wanted to assess whether he felt my rules were unfair or that I wasn't abiding by them. He declined to reply to that question.


If it helps you understand the objection I have to this whole exercise: I think the whole contest was weaselly. You designed a challenge that stipulated away the simplest and most reasonable attacks on the system, and created an objective for the contest that would have been equally annoying to achieve had you used repeated-key XOR as your cipher instead of AES, and suggested in your promotion for the contest that its intent was to demonstrate something about cryptography. I think the commenter who parodied the exercise with a contest about a post-it on his computer hit the nail on the head.

But, since I didn't take the contest seriously, I didn't take the time to verify that you'd actually set up your server properly. Moxie, on the other hand, did. If I put $1000 on the line, I might have taken the time to ensure that the SSL connectivity I had set up for my server actually worked. You didn't; you left a vulnerability on your server that any security audit of any SSL-only service would have flagged as "MUST FIX". Moxie not only flagged the vulnerability but explained it to you and provided exact steps to reproduce it for you.

Here it is worth noting that $1000 probably does not buy enough of Moxie Marlinspike's time to compose the emails he seems to have sent you.

Instead of conceding the point --- that, in setting up a contest that obviously depended on your SSL/TLS connectivity actually working, you had made a material error that made the contest easy to win --- you raised an arbitrary objection: the judge of the contest was permitted to take arbitrary steps to verify in exacting detail which SSL certificate was presented, at least on the first connection if not on subsequent connections (as you note, Moxie didn't get that far with you). The same objection would have worked if you'd used a self-signed certificate; Moxie would have said, "any MITM could interpose a new certificate" and you'd have said "oh, but I would check the fingerprint of the certificate against the certificate pinning list I maintain in my head". Here, for obvious reasons, Moxie gave up.

Nobody cares about your money. The problem is that in "staking" $1000 on this contest, you've created a perception, exclusively among people who don't know much about cryptography, I'd add, that SJCL stapled to a web form is a viable mechanism for building a secure system (or at least, something more secure than just HTTPS). This perception is wrong, and it's aggravating that the $1000 gimmick reinforces it among precisely the people who most need to be made aware of how wrong it is.

I don't think you're a dishonest person, in the sense of, "I would avoid doing business with you". I don't know you at all, and certainly not well enough to judge your character. I do think you've fallen victim to message-board lawyering, something (uh) many of us have problems with, and are reluctant to concede any point that harms your argument. That, sorry to say, is weaselly behavior. If it helps you any to hear it, I'm sure someone could find someplace on HN where I too have been weaselly in debates.


> (or at least, something more secure than just HTTPS).

It is demonstrably more secure than just HTTPS. Even if only slightly. Because HTTPS doesn't even try to do those things.


I appreciate your responding thoughtfully.

Regarding Moxie, I noted in the original post and immediately after his first post that he'd need to provide a valid SSL certificate. He asserted one wasn't necessary and I assumed perhaps he had some other trick to make it appear as though he had one so I invited him to email me details. His attack didn't do that and it turned out he had interpreted the requirement for a valid SSL certificate to mean that he couldn't present one the browser would flag as invalid. I'm sorry if you or he feel that I wasted his time.

The SSL connectivity for the server did work. The only way to defend against the attack Moxie posted would be to modify my browser. The exploit Moxie used depends on the fact that if one doesn't type https://, the browser will request http. You could argue my error was abbreviating the URL in the posting, but that wouldn't change how users type it. The best defense against this is HSTS, which comes in two flavors: the standard version is applied after first visiting the site. Would you have said that invalidated the attack? The attack would work just as well on a browser with a cleared cache or running in incognito mode.

The only real defense, which to my knowledge only exists in Chrome, is the HSTS preload list. I suppose I could have noted that I'd modify my browser to put the site on the preload list, but that doesn't affect real-world security either.

That said, I am extremely curious how you would defend against SSL stripping in the wild. This seems like a potentially devastating attack. None of the banks that I checked are on the preload list, and many don't seem to use HSTS at all. What defenses would you have considered sufficient so as not to consider this trivially exploitable via an SSL-redirect vulnerability that affects almost the entire web?

Your argument above effectively reduces to: The web is insecure so you always lose and putting in a disclaimer to ban such attacks is weaselly. The SSL vulnerability is a huge problem but that doesn't mean understanding the security of the rest of the system is worthless.

I never said I'd check the fingerprint of the certificate against my memory. Moxie used that retort against me after his attack produced a session that was obviously not over https. I explicitly denied that that was one of my criteria. That said, at least in chrome, appspot.com is pinned.

The irony is that this does show that SJCL provides a modest security increase over plaintext in the case of broken SSL. It requires a per-site attack to be constructed ahead of time vs. simply reviewing the plaintext for all sites after the fact and picking what's valuable.

Your key argument seems to be that naive developers might use JS crypto because of this. If they use it naively, I'm sorry for that. I'm also sorry if they exclude it naively because of other rhetoric. I hoped that this would generate a more nuanced discussion that would make them more aware of the risks if they chose to use it. Obviously that wasn't the case, and nothing was learned about the various angles from which this app could be exploited, aside from the trivial one.

I truly appreciate your laying out your reasons above. I understand I struck a negative chord but am glad that we've been able to move into discussing the specific technical issues.


Since it's too late to edit, I'm posting here to correct my statement that Chrome is the only browser that supports HSTS preload; Firefox does as well.


Boy, you need to calm down, tptacek. Personal attacks on the OP based on a highly disputable claim (it's just a description of a possible attack that doesn't fulfill the original conditions in the first place) are a bit unseemly and make you sound desperate to an extent.


YOU'RE ALL WEASELS.


Great. Keep it classy.


Thanks. This is an attempt at doing three things:

1. Learning more about the security risks of JavaScript crypto.

2. Facilitating a more creative and nuanced discussion of its pros and cons.

3. Learning about interacting with HN.

I failed at my second goal, and while I've succeeded at the third, it was in a different way than anticipated.

I also want to reiterate that this isn't some backhanded attempt at a product launch. I would have hoped that my statements, the use of an appspot domain, the absence of design, and the limited functionality would have made that clear.


For the record: at no point have I believed this contest was a surreptitious effort to market a product.


The instructions are in a note.

If I understand your attack, you'd need to set up a clone website that resends the password in the clear, have a malicious network redirect me to it, and get me to enter my password. I'm happy to enable you to demonstrate that it works. The hardest part is presenting a valid SSL certificate. As I warned in the initial post, "I won't promise to log in if you can't provide a valid SSL certificate."

Email me and we can arrange the details.


No need to set up a clone website or to obtain a valid SSL certificate.

If I'm in a position to observe a notecrypt user's communication (the reason encryption is necessary), I can just run sslstrip (no SSL certificate required).

It will transparently intercept all of the notecrypt user's plaintext communication to notecrypt without generating any browser warnings.

At that point I also have the opportunity to modify any of the plaintext traffic as it passes by.

The user will be communicating with your actual website (no need to set up a clone), but I can just modify the JS as it is transmitted to the user so that it doesn't actually encrypt anything.

This attack just requires running a single command, no complex setup required. You might want to read more about sslstrip to understand how this works.


FWIW, my reading of the OP is that it's stipulated that the user will check for the presence of a valid SSL session (i.e. with in-browser visual indicators) before login.

BTW, have been following your work on TACK with great interest.


That's a bad assumption. I could feasibly purchase the appsp0t.com domain, grab an SSL cert for notecrypt.appsp0t.com, hop onto the LAN and run sslstrip, redirect the user to https://notecrypt.appsp0t.com, and voilà: a green address bar with a close-looking address. That would probably fool me.

The "green" indicator is nice but definitely should not be relied upon to protect the user.

Note: I used appsp0t as an example; no idea if it's really available to be bought.

Edit: it's not letting me reply to the comment below (probably because this is a new account), but AFAIR most browsers have fixed the IDN problem by checking for "suspicious" characters (characters that look similar to Roman glyphs) and forcing the URL to be rendered in its full punycode form.


I think it's also worth pointing out that in Moxie's sslstrip talk he goes into detail on using IDN (http://en.wikipedia.org/wiki/Internationalized_domain_name) to spoof something similar (for non-English TLDs).

Not sure how valid that still is (the talk is a couple of years old; I only watched it today), but it has to be assumed that a portion of users are going to fall for even a badly mimicked URL.

Gotta say the IDN stuff is impressive in how generalised it could be. Terrifying. I'm convinced he's owed the $1,000.


And what is the method of bypassing the similar-glyph detection and the very strict language whitelists for different TLDs?


I said 'Not sure how valid that still is' for a reason :). I just found it extremely interesting and 'out of the box'. After reading up a bit on modern defences, I think you're right that it's irrelevant nowadays. (Unless you're in legacy hell, doubtful for the target user demographic)

But the parent comment is still valid: there's nothing to stop Moxie registering another domain, even note-crypt.com or notecrypt.org or anything like that, and the average user will be complacent about it. (The same applies for appspot: note-crypt.appspot.com vs. notecrypt.appspot.com vs. notecrypto.appspot.com.)

It only takes a single lapse in checking the domain and they've lost their login details and encryption key to the attacker.

Point is, the JS crypto does not add anything to the situation. All the security is provided by SSL, and once it goes, the JS doesn't help. It just gives the users a fake sense of an additional layer of security, which is dangerous.

Moxie broke the current SSL usage, and therefore, he broke the JS crypto (as he controls the communication channel). He beat the current state of the challenge.


To me it seems far too pedantic to give an award for pointing out that, on a forum post that blocks links (Ask HN posts), you're at risk if you use the partial URL incorrectly.


That's a valid point, but I can't see it being impossible that the user will accidentally end up sending an http request rather than https, whether through user input or a maliciously placed link.

My understanding is that the HSTS header would make this attack less useful. But it's still a concern if you use Private Browsing/Incognito: the initial request will still hit a 301 redirect (vulnerable to interception by a MITM). I've just verified this with Facebook.com on my machine (Chrome 26, OS X).
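For reference, HSTS works by having the site send a response header over HTTPS that tells the browser to refuse plain-http connections to that host for a period of time, along these lines (illustrative values):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

The weakness being discussed is the bootstrap: the very first request, or the first request in a fresh Incognito session, happens before the browser has ever seen this header, and that is exactly the window an sslstrip-style MITM needs.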

I think it's quite fair to expect that a user of this kind of site is likely to use Incognito.

I'm actually kind of surprised, as I thought Chrome had a standard list of sites that use https only, such as Facebook. (Woah... it seems the preload list is TINY: http://www.chromium.org/sts )


Some day maybe we'll see browser-enforced secure DNS that has the ability to include certificates or set HSTS. Maybe the same day IPv6 finally takes over, in a few centuries.

I like the kind of pinning and preloading that chrome does but it's such a tiny gesture compared to the size of the internet, and nobody else seems to be trying to deploy better security.


Someday ;)

Perhaps there could be open whitelists where sites could nominate themselves as "https only". It wouldn't even need to be built into the browsers; it could just be a thing people do when they launch a clean browser install: hit up https://blahsitelist.com and click a button that fires off https requests to all of those sites, which would cache the HSTS header. (I've only stumbled on HSTS headers today, so I may be overestimating their usefulness.)

Although, come to think of it, isn't that basically what the HTTPS Everywhere extension does?


I'm not seeing where it's stipulated that the user will check for the presence of a valid SSL session. Can you clarify?


I'm offering my interpretation of the OP's original words defining the terms of the challenge "I won't promise to log in if you can't provide a valid SSL certificate :)".

So my read is that for this particular challenge there's only one "user", the OP. I read it as he's willing to facilitate the attack by using the site from a hostile network but has specifically said he'll check for a valid SSL session.

So by that interpretation, despite how much I think Moxie deserves another $1k for doing great work in general, it seems an SSL strip attack is unlikely to satisfy the offered terms.

Of course I expect an SSL strip attack would be effective against end users generally.


Intercepting the plaintext of the communications to the server isn't sufficient since the password is sent to the server after being hashed and the plaintext is encrypted. It may provide the basis for an attack but in itself it doesn't break the second layer of security.

Ironically, this is an advantage of having a second layer of cryptography in the browser. It forces an attacker to do significantly more work to acquire the data passively or requires an active attack.


Using his same attack, it's easy to spoof the JS crypto libs to be insecure or have a backdoor, while the site appears unmodified to the user.

Even worse, the attacker only really needs to spoof the JS assets once and set an extremely long expiration date in the response cache headers, and then he's poisoned your site until the user forces a reload or clears his cache.
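To make the cache-poisoning point concrete: the one tampered response for the JS asset just needs to carry aggressive caching headers (illustrative values below, not captured traffic), and the browser will keep executing the poisoned copy on later visits, even over an honest network:

```
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: public, max-age=31536000
Expires: Thu, 31 Dec 2037 23:59:59 GMT

/* backdoored copy of sjcl.js, served once by the MITM */
```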

tldr; you owe mox $1000


I suggest that you concede.

It is easy enough for you to demonstrate it to yourself with the components that he describes.


You won't need any SSL. A user's typical first interaction with notecrypt will be through http://notecrypt.appspot.com (which just needs an evil reverse proxy to the actual site, or an emulation, while backdooring the JavaScript to send the password somewhere).

The client never sees any SSL.


Didn't Iran get a valid SSL certificate for *.google.com a while ago?

Edit: Found an article, by cperciva no less: http://www.daemonology.net/blog/2011-09-01-Iran-forged-the-w...


Among others. They were using them to conduct targeted and dragnet man-in-the-middle attacks in the country for weeks or months before any users spoke up and browsers were patched.


If Chrome were involved, it aggressively refuses to use anything but the whitelisted fingerprints and will notify Google if a MITM is detected for any high-profile hosts.



This isn't a product launch. It's a demonstration to focus discussion.


I like that! From now on, when you want to introduce some new crypto idea, just make sure not to call it a "product", then issue a $1000 contest to assure people the idea works. Why doesn't everyone do that?

Oh, wait, I think Schneier answered that:

Contests are a terrible way to demonstrate security. A product/system/protocol/algorithm that has survived a contest unbroken is not obviously more trustworthy than one that has not been the subject of a contest. The best products/systems/protocols/algorithms available today have not been the subjects of any contests, and probably never will be. Contests generally don't produce useful data. There are three basic reasons why this is so.

They are:

1. The contests are generally unfair.

2. The analysis is not controlled.

3. Contest prizes are rarely good incentives.

I'd submit that (1) doesn't count here, because the idea you're demonstrating is so obviously flawed that contestants aren't at any disadvantage. But (2) and (3) are absolutely valid here: there's no structure to the contest (it's a bunch of Hacker News people poking at a page at random with no collaboration, milestones, or test plans), and $1000 buys ~3 hours of cryptanalysis work if you source it from software security people instead of actual cryptographers (who bill north of $450/hr).

I have no idea who you are and so I don't want to sound like I'm offended by what you've posted. But you are like the 100th person to staple SJCL onto a web app and posit that they've created something more secure than a private wiki. Actual professional cryptographers have addressed similar claims in the past. Here's Nate Lawson:

http://rdist.root.org/2010/11/29/final-post-on-javascript-cr...

Instead of the brinksmanship of offering a contest, why don't you instead just listen to the arguments people are making and try to learn from them?

Triple bonus points for noting that AES and SHA3 were the products of design contests, after Schneier wrote this, and then observing the differences between those design contests and the one at the top of this thread.


Some famous contests: the RSA Factoring Challenge [1], the RSA Secret-Key Challenge [2], Pwn2Own [3]. In fact, the RSA Secret-Key Challenge was organised "with the intent of helping to demonstrate the relative security of different encryption algorithms." Now they are providing way more money, but overall I think that while contests don't prove anything, they certainly help improve consumer confidence and potentially help fix non-critical security bugs.

[1]http://en.wikipedia.org/wiki/RSA_Factoring_Challenge

[2]http://en.wikipedia.org/wiki/RSA_Secret-Key_Challenge

[3]http://en.wikipedia.org/wiki/Pwn2Own


That article provides a very solid argument for javascript crypto having no advantages over server-side crypto and being harder to do without errors.

But just because it has no advantages doesn't mean it won't work. A three wheeled car will still get me from point A to point B.


I agree it only proves security up to a certain value. Would you be happier if I increased the reward?

Regardless, what do you think of someone who publicly calls something a terribly insecure idea, and then reacts with snark and doesn't actually attempt to crack the system when the opportunity is put before him by someone willing to bend over backwards to help him crack it?

Propose a plan to actually crack the system. Do you want the database? Do you want to control the network where I log in?


Imagine if people built bridges the way you propose building secure software. "What do you think of someone who publicly calls a new bridge design unsafe, and then reacts with snark and doesn't actually attempt to destroy the bridge when the opportunity is put before her by someone willing to bend over backwards to help her do it?"

Engineering doesn't work that way. Your proposed solution doesn't become more sound simply because you feel aggrieved at the way people react to it.

Also: it's deeply dishonest to suggest that the only reaction you've received to this design is "snark". As I pointed out above, with a link and everything, and as you yourself acknowledged in your original post, you've been given a litany of reasons why your proposed design is flawed. You just don't seem to like hearing them.


I'd expect them to propose a scenario that could be tested either with software or a miniature. You're under no obligation to but examples are persuasive. Feynman's ice water demonstration convinced far more people than his well-reasoned appendix.

I'm attempting to give those who believe that JS crypto should never be used a way to make a clear public demonstration. What better target could you ask for? A web application hacked together in a few hours by someone with no training in computer security who isn't even a professional software engineer and who is willing to arrange scenarios favorable to the attacker.

I'm not aggrieved by the negative reaction. I have no skin in this game. I don't earn anything if people walk away believing this idea is more secure. I just wish you wouldn't keep repeating the same canard about having to bootstrap the crypto on every use while attacking the messenger and the manner in which the message is being delivered.


I'm inclined to agree with the original poster. If it's really as deeply insecure as you claim, you should be able to crack it in less than the 3 hours of time that you also claim the proposed prize is worth. The OP even offered to increase the prize money if you think it'll take longer than that.

In other words, sometimes, you have to put your money where your mouth is. The OP is doing that. You're not.


If it's actually _as_ insecure as you say, why not collect your $1000? You'd make it back and then some with the reputation gained from your status as a security expert.

Instead, I'm inclined to believe you just can't.


I have put a sticky note on the front of my laptop. This sticky note contains a single english word.

If you can tell me this English word, I will paypal you $10.

--

Despite no one winning my "contest", my sticky-note encryption system is not particularly secure.


I'll happily do that. When can I swing by to chat?

Different systems are secure against different threats. Sticky notes are secure against people who can't get physical access to their location.

The question in this case is what attacks this experiment is secure against.


You posted your message because you said you wanted the discussion, and to see the flaws in your argument. But when people answer you, you seem to be working hard to ignore them ;)

I've taken the sticky note down. It had the word "contrite"

There were plenty of exploits you could have done without having local access to my machine. Just one example: if you view my profile, you can determine the company I work for. Use LinkedIn etc. to message someone there and offer to split my generous reward with them if they tell you the word.

Just because no one took me up on my offer doesn't mean my sticky-note was secure.

Yes, of course it was a silly "contest", but that's the point - Contests like this don't prove anything at all about how strong or weak your solution is.


>You posted your message because you said you wanted the discussion, and to see the flaws in your argument. But when people are answering you, you seem to be working hard to ignore them ;)

Where are these people that are actually suggesting plausible attacks rather than just mocking the idea of a contest? I don't see them.


I welcome the discussion and am learning what I can from it. I was hoping for an exploit. So far one person has proposed one that may work.

Those are creative solutions to your contest. I should have been cleverer.


The word was 'contrite'. You didn't set a time limit on your contest.


Ratatouille!


Don't you get it? It's a catch: the sticky note contains the phrase "a single english word".


calm down and let them have some fun mr. security expert.

Bruce Schneier is not God. stop kissing his butt. ;)

By the way....

"3. Contest Prizes are rarely good incentives..."

meanwhile...

"Our Twofish cryptanalysis contest offers a $10K prize..."

"2. The analysis is not controlled..."

meanwhile....

"..There are no arbitrary definitions of what a winning analysis is...We are simply rewarding the most successful cryptanalysis research result, whatever it may be and however successful it is..."

LMAO...this is from the same article!

"The above three reasons are generalizations. There are exceptions, but they are few and far between. The RSA challenges, both their factoring challenges and their symmetric brute-force challenges, are fair and good contests. These contests are successful not because the prize money is an incentive to factor numbers or build brute-force cracking machines, but because researchers are already interested in factoring and brute-force cracking. The contests simply provide a spotlight for what was already an interesting endeavor. The AES contest, although more a competition than a cryptanalysis contest, is also fair.

Our Twofish cryptanalysis contest offers a $10K prize for the best negative comments on Twofish that aren't written by the authors. There are no arbitrary definitions of what a winning analysis is. There is no ciphertext to break or keys to recover. We are simply rewarding the most successful cryptanalysis research result, whatever it may be and however successful it is (or is not). Again, the contest is fair because 1) the algorithm is completely specified, 2) there are no arbitrary definition of what winning means, and 3) the algorithm is public domain."


Our Twofish cryptanalysis contest offers a $10K prize for the best negative comments on Twofish that aren't written by the authors. There are no arbitrary definitions of what a winning analysis is. There is no ciphertext to break or keys to recover. We are simply rewarding the most successful cryptanalysis research result, whatever it may be and however successful it is (or is not). Again, the contest is fair because 1) the algorithm is completely specified, 2) there are no arbitrary definition of what winning means, and 3) the algorithm is public domain."

This contest encrypted something with AES, stuck it in a database, attached a web app to it, stapled SJCL to the web app, and then said "decrypt the encrypted data in the database and I'll give you $1000".


No, I've invited you to propose attacks and help you carry them out. Do you want to try to use a malicious network to inject code to steal my password? Find a vulnerability in one of the endpoints to take control of the server? Try to find a cross-site attack?

Propose a practical attack against this app and I'll help you carry it out.


But the claims about JS crypto deal with users' security in live settings: they usually relate not to "app" issues but to users' issues, which is not the same thing.


That is what I understood, but it seemed too dumb to me. It looks like that's the case, though. That is not browser security; that is just AES and possibly server-side security (what ID does the item have?).

I am at a loss.


so? it was interesting. don't be a hater.


So? You guys wrote an article about it...

http://www.matasano.com/articles/javascript-cryptography/

"SJCL is great work, but you can't use it securely in a browser for all the reasons we've given in this document.

SJCL is also practically the only example of a trustworthy crypto library written in Javascript, and it's extremely young.

The authors of SJCL themselves say, "Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks." That last example is a killer: what they're really saying is, "we don't know enough about Javascript runtimes to know whether we can securely host cryptography on them". Again, that's painful-but-tolerable in a server-side application, where you can always call out to native code as a workaround. It's death to a browser."


I'm lost. What are you trying to say here?


I think what he is trying to say is: let people learn from their mistakes. I really don't understand your focus on making a kind of witch hunt any time someone tries to learn and implement crypto. It is certainly the responsibility of the developer to try not to make mistakes, but it's also the responsibility of the user to know what to expect of what he is going to use. And I think most people on HN are smart enough to take this kind of post with a grain of salt and not expect too much of it.


It's starting to piss me off when people say I'm on a witch hunt for people learning crypto. I'm obviously not; I'm in the middle of dealing with literally hundreds of 1-1 conversations with strangers to help them learn crypto:

http://www.matasano.com/articles/crypto-challenges/

My problem is with people who don't want to learn crypto, but do want to use it anyways.


> I really don't understand your focus on making a kind of witch hunt anytime someone try to learn and implement crypto.

There's a stark contrast between someone wanting to learn crypto (and being humble about the process) and someone who's new to crypto but being anything but humble.


Just a bunch of pompous platitudes, as is the oft-cited Matasano article on JS crypto. I didn't know crypto professionals needed to be so self-aggrandizing.


> As the oft-cited matasano article on JS crypto

Considering that tptacek is a Matasano researcher, why wouldn't you address the specific issues you have with the article? It doesn't seem filled with platitudes, and it's not self-referential at all, much less self-aggrandizing.

What is self-aggrandizing are people offering crypto snake-oil with bluster and boasts and contests instead of entering into the dialog and state of research. It's a very good thing that the establishment is skeptical and careful about this sort of thing.

The weirdest part is that everyone seems to take this so personally. Sarah Flannery, a teenager at the time, took it better when her crypto algorithm was broken. Her attempt was no slouch either.


Have you considered asking Bruce Schneier? Because the rebuttal to "here's a contest, show that what I'm doing is insecure" is his, not mine.


This is bad even as contests go.

1) You don't provide a concise summary of what threat models you are considering. So in that case, my attack is: I break into your server and silently swap your code with code that ships me your user's password.

2) You don't even give a human readable listing of your site's source code. Reverse engineering is tedious work that's entirely different from cryptography or security analysis. You can be sure that any motivated attackers will have skilled reverse engineers deconstruct your system and then hand off their results to skilled security analysts/cryptographers.

But that sure as hell will cost more than the paltry $1000 you're offering here.


1. I proposed a few examples: Crypto flaws that you could exploit by giving you the ciphertext (only 1 person has asked), cross-site request vulnerabilities, and attacks from a malicious network. I also invited you to propose your own attacks.

2. You have the client side code. I'm happy to provide the remaining code if anyone is actually curious.


As you can see upthread, "malicious network" has already been invoked. I think you owe Moxie Marlinspike $1000.


I've emailed him to arrange. If it works, he will receive the reward.


"If it works"?


1. I don't see how any of these threat models can't be addressed by doing server-side crypto. Server-side crypto libraries are more mature, more reviewed, and thus by default better from a security standpoint.

2. The client side code is obfuscated. The server-side code is not provided. To perform a reasonable analysis of a system's security, a reviewer needs unobfuscated access to both. Why would you try and make security and crypto researchers de-obfuscate or guess your code, when that is not their specialization?


1. As discussed in another comment, this requires a similar level of trust to encrypting the data on the server. The only difference here is that, to the best of my knowledge, an attacker who took control of the server could silently log the plaintexts in the server-side case, while some indication would be provided in the client-side case. Most wouldn't notice it, but some would eventually.

2. The code I wrote isn't obfuscated. The code at the top is just SJCL and angular.js which are https://github.com/bitwiseshiftleft/sjcl and https://github.com/angular/angular.js


1. A server could silently insert code that replaces your keys with a well known one. Don't kid yourself into thinking anyone would notice this if it was silently tucked away in the minified angular or SJCL code.

2. Your definition of obfuscated and mine clearly differ. I'll be more direct: where's the (well-commented, clearly coded) model/controller code?


1. Agreed. That's an avenue by which a vulnerability in the server or communication channel could be exploited.

2. It doesn't exist. The code was written in the same form it was uploaded.


Here's the issue I have with in-browser JS crypto: let's say I compromise your server, I can now send code that uploads all of the user's keys to me, without needing to compromise their individual machines.

There are similar issues with auto-update mechanisms; if someone owns the chrome auto-update system then they can run arbitrary code on everybody's machine. That's higher-stakes, but also more secure than the average webserver (which if javascript crypto were to become ubiquitous, would become an issue)


What's an alternative implementation that encrypts the data at rest and is more resilient to a compromised server?


A downloadable client that doesn't bootstrap its own crypto every time it runs from your server.

Can I have my $1000 now? Actually: I'd prefer if you just sent it to Partners in Health; they're great.


We can argue about the merits of downloadable solutions but that isn't a flaw in this application. I'm willing to make it easy for you: You can even have the database to let you crack it without first having to break into my server.

I agree with you that there are risks to browser-side crypto, but how are they any different from upgrading software without reevaluating the security of each upgrade? Browser-based crypto also has benefits; most users prefer web applications to installed ones, all else equal, because they're easier.


Because normal users don't upgrade their applications every time they run them? I feel like people have to entertain this argument every time Javascript crypto comes up, and that it's a self-evidently silly argument. Would you feel as comfortable running SSH as a Java applet delivered from a website every time you needed it as you do running /usr/bin/ssh? Of course you would not.


While I understand your point and agree mostly with what you're saying, it's worth noting that a functionally equivalent thing happens with /usr/bin/ssh on most linux systems.

A remote web site is consulted (usually via plain http, not even https!) and new code is downloaded. Signatures are checked locally (using gpg, which is itself bootstrapped from trusted installation media, and can be updated at any time like any other package), and then new software is installed.

The update cycle is longer, but it's still there. Under the threat model of "your source of software is compromised", no system is safe.


The software source for stuff like /usr/bin/ssh is a thing that has some serious infrastructure, well-established procedures and well-organized people. Right from the start, the trustworthiness of that source is higher than that of an average, arbitrary site.

I say "arbitrary", because I assume you would want to serve this JavaScript solution from your own site, the way people do with jQuery.

On the other hand, if you do decide to serve it from one central place, then you could establish similar level of trustworthiness for that source, but your solution would still be at a disadvantage. Running JavaScript is like downloading new software to your home directory on a regular basis and executing it; you don't need to compromise the software source for /usr/bin/ssh, you just need to compromise a less trustworthy bit of software and then make that bit modify the in-memory behavior of /usr/bin/ssh (because JavaScript doesn't have the kind of security the OS gives processes).


> The software source for stuff like /usr/bin/ssh is a thing that has some serious infrastructure, well-established procedures and well-organized people. Right from the start the trustworthiness of that source is higher than average, arbitrary site.

I agree. That doesn't change my point: JavaScript or binary packages, if you're running crypto you need to be able to trust the source of your software.

> Running JavaScript is like downloading new software to your home directory on a regular basis and executing it; you don't need to compromise the software source for /usr/bin/ssh, you just need to compromise a less trustworthy bit of software and then make that bit modify the in-memory behavior of /usr/bin/ssh (because JavaScript doesn't have the kind of security the OS gives processes).

This analogy is strained. It only really applies if your site is vulnerable to XSS attacks. After that, JavaScript is generally far more restricted than your average userland code. JavaScript from one site can't modify javascript from another (well-constructed) site. Whereas you correctly point out that once you can execute arbitrary code with user level privileges, the game is usually up.

In particular, X session keylogging means you can grab a sudo password from a user once you can spy on their session, which any userland program can do. JavaScript from attack.com can't monitor keypresses on trustedbank.com unless it's explicitly included (which can happen by accident via XSS).

Look up the same origin policy to see the kind of constraints that are usually placed on JS code.


So try to articulate why you wouldn't be comfortable just doing a "curl foo.com | bash" every time you SSH'd to a site. Obviously, very few people would be comfortable doing this.


Because I wouldn't trust foo.com as a source of software, in your hypothetical example. If we're being hypothetical, I would have no problem running something like:

$ wget https://openssh.org/releases/openssh-blah.tar.gz ; tar -xf openssh-blah.tar.gz ; make && ./bin/ssh host

If such a thing were possible (and not hideously slow). I agree with what I believe is your thesis: good crypto is hard to do right and that you should be very careful with which sources of crypto software you trust. I just wanted to point out that trusting trust is a big problem in our industry, and isn't magically solved by using binaries on your local machine.


How is the frequency of upgrade relevant to the security of the system? If given a new version, people tend to upgrade and I take control of your software update mechanism, how is the security risk any less? You'd presumably argue that we should have higher security standards for a software update mechanism than a web application. If that's the case, would you agree that any web application that's as secure as a software update mechanism, should be allowed to distribute JS crypto?

Edit: I have no specific problem running ssh as a Java applet if I trusted the site and connection. How do you determine when you're comfortable upgrading ssh or any other security program through your package manager?


I gave a specific example. You fled to abstraction. I think my example is illustrative. You haven't explained why it isn't. So I'm going to decline to engage with your abstractions.


Well honestly the main reason I don't want to run the code in your literal scenario is the phrase 'java applet'. I wouldn't be very happy with a permanent one either. I think it's fair to focus on the downloading aspect, and ignore the issues of convenience/java. Then we see that people like AnIrishDuck are perfectly happy with such a scenario.


> most users prefer using web applications to installed all else equal because it's easier.

Except that all else is clearly not equal: your solution is insecure.


A signed applet/activex/etc. or browser plugin signed by a different authority, or under some kind of multi-party audit and control.

There is definitely something to building "hostproof" applications. Javascript delivered from the same host isn't how to do it. The long-term solution is probably some kind of packaged thing similar to a binary, and some means of separating out who controls the server infrastructure from who controls that code. Even if it's the same organization, there's no need for the code signing key to be "live" on the Internet, so someone who compromises front end servers shouldn't be able to compromise the binaries.

Browser java hasn't lived up to any of its promises, and I'm not sure about native code execution mechanisms. webcrypto might be a way to solve this general problem in the future.

There is of course the bootstrapping problem when you first visit a site, but there are out of band ways to try to address that.

The other option is just to give up and do mobile stuff; a mobile development framework combined with server-side development framework designed to build "hostproof" stuff would be pretty cool. You get a trusted platform, potentially can do trusted stuff server side too, have a reasonable third party for distributing updates (at least on iOS) with some level of auditing in case of a bad app being served, etc. The problem is that if you can't trust Apple, you're kind of doomed; the ideal would be a per-enterprise root of trust, where no one else has the ability to push OS/app updates, and can actually verify how the hardware works. There was some work done by NSA with Android in that direction.


Also remember a "compromised server" could be DHS serves you with a warrant that forbids you from discussing the warrant.


> if someone owns the chrome auto-update system then they can run arbitrary code on everybody's machine

It would be easier to compromise a large browser plugin's source control. If say, AdBlock was compromised, that's tens of millions of users that would be running malicious code with absolutely no indication that the update even occurred.

With DOM level access on every page a user browses to, an attacker could just compromise passwords, inject their own advertising, attempt to gain higher access, etcetera. For the case of the OP, that would mean that the client side encryption would be rendered worthless.


What does this provide over just using SSL and deriving a key from the user's password and doing crypto on the server? Nothing.

Anyone that compromises the server can compromise the client and have it send the keys back.

Your test is just pointing that SSL is useful; all the security hinges on SSL.


Having a server compromised is bad. Which is worse, the ability to read plaintext on the server silently or forcing a code change to be distributed to every user, however subtle?

That said, the key for security is to design an app where compromising the server isn't possible.


> That said, the key for security is to design an app where compromising the server isn't possible.

That's simply not possible. There's always an exploit somewhere. Continually kicking the can of security "upstream" and being confident that your stuff is secure so long as no-one else messes up is naïve and myopic.


In that case is it possible to design a secure application? Can you design an application used by real world people that will remain secure even if the infrastructure that controls it is taken over?

If a server is taken over, the best one can hope for, is sufficiently layered defenses to limit the immediate gains of the attacker and an alarm to warn users not to trust the application until further notice.


However subtle? It's entirely transparent to the user!

Even if the user is somehow checking the exact JS bytes received (which is a pointless stretch anyway), what if the server is already compromised the first time the user gets there? Or they're on another computer?

And then, how do you deliver updates? If I compromise SSL or your server, I'll use that channel to tell people "we updated the code, it's good".

How is the user supposed to determine to trust the JS in the first place?

This literally boils down to strength of SSL.

Create something in JS that doesn't rely on SSL, and then you'll have something to show off.


Hmmm, code-signed JS would seem to fit the bill, no? I guess that fundamentally, it is still using the same CA structure that SSL is. This, however, has no solution even for compiled applications: if the server's private key is compromised then all bets are off.


> Which is worse, the ability to read plaintext on the server silently or forcing a code change to be distributed to every user, however subtle?

I would say that the latter is far more severe. People are going to be a lot more complacent when they believe that the data they are storing can't be accessed by others, or is otherwise immune to attacks. There's also the distinct possibility that the user is going to be storing sensitive information that they wouldn't normally store anywhere but locally.

Don't get me wrong, I love encryption, I just get uneasy around browser implementations. There's some very fragile ones in use at the moment (see mega.co.nz, blockchain.info).


I'm uncomfortable with arguments that are equivalent to PerceivedBenefit(X)>RealBenefit(X) and RealBenefit(X)>0 but NegativeBenefit(Actions|PerceivedBenefit(X))>RealBenefit(X) so X should be banned.

Since PerceivedBenefit(X) and Actions will both vary by individual and are controllable through learning and discipline, I'd rather give individuals the right to choose.

Equivalent examples: Seatbelts are bad since people will drive recklessly; calculators are bad since people won't bother learning arithmetic; cars are bad since people will walk less. In each case, the negative does occur. I'd argue that as a whole society is better for all of those.


The problem with in-browser crypto is way bigger than the feasibility of properly implementing crypto in the browser--it's the fact that you can never ensure the security of the network. As such, I think the prevailing argument against in-browser crypto is that it provides a false sense of security.

EDIT: Nevertheless, this seems like a neat project! I'm not anything resembling a pentester though, so I'll leave the challenge to the experts :)


Network security is a problem but breaking https doesn't seem trivial in the wild. That's why I'm inviting people to try exploits on a real app and willing to use a malicious network to let them break it.


So let's talk this through.

I think the idea behind crypto in the browser is cool, but it seems like anytime you're requesting the JS from the server, you need to trust the server -

Instead of serving crypto.js, you could serve plaintext.js, and I wouldn't ever notice the difference, would I?

So if we agree that I need to trust you, to serve the JS crypto properly, then what's the difference between doing that and trusting you to encrypt the text for me?


Yes, you need to trust me. I'd argue it's easier to trust me if the crypto is happening on the client since if I change it, I could be caught. The probability of being caught in any one instance is very low but over many instances is high.

Would you make the same argument for a native application? Unless you read the source code every time you upgrade, you're exposed to the same risk. cperciva ran an inadvertent real-world experiment on this by making an error during a refactoring of Tarsnap: it took 19 months for the subtle crypto bug to be noticed.


If you changed it globally, you might be noticed next time someone did an audit, which I suspect would be rare.

If you added a backdoor on the server that said "If userid == e1ven{crypto.js = plaintext.js}" it would be much harder to detect.

Hushmail did something similar with a modified version of its Java applet, back a few years ago-

http://www.wired.com/threatlevel/2007/11/hushmail-to-war/

My point isn't that you should never trust webapps - It's that doing the processing in JS, or on the server, doesn't CHANGE the threat model. In both scenarios, I need to trust you an identical amount, so there's no security advantage to doing the crypto on the client side.


Well, it's slightly different. Every person might not check every single time, but it would be possible to do something like write a browser plugin (or setup some sort of automated system) to check what was going on since the execution is happening client-side.

I made sure to emphasize 'slightly' because this could just trigger an arms race to getting around detection. You might be safe if you kept your detection methods a secret (and only personally used them). On the other hand, if the detection methods were used/deployed on a wide scale, then anyone trying to compromise you would just work around the detection.


What would that browser plugin check for? How would you write it?


- Check signature of the incoming JavaScript against known good versions.

- Check the signature of the HTML page against known good versions.

- Check that the information posted back to the server 'looks encrypted' vs. plaintext[1].

- Check the external resources that the page is requesting. Is it grabbing Javascript files that are unexpected (e.g. trying to serve up a known-good version of crypto.js, but then overwrite its methods with another Javascript file).

I'm not sure if many of these things would be possible in Chrome/Chromium, but probably in Firefox.

[1] Obviously 'looking encrypted' isn't some sort of binary decision, but I'm guessing there is some amount of checking you could do to see how closely it resembles random noise. If you sent random noise, and it wasn't encrypted, it would probably pass this check, but most people trying to protect something are probably sending something that won't trip this 'alarm.' This is not fool-proof, but adds a layer of protection when used with other things.
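The first two checks amount to pinning a digest of audited code, in the spirit of what later became Subresource Integrity. A minimal sketch in Python, where the filename and "known-good" bytes are invented stand-ins for a real audited build:

```python
import hashlib

# Hypothetical pinned digest of a known-good, audited crypto.js build.
KNOWN_GOOD = {
    "crypto.js": hashlib.sha256(b"/* trusted crypto.js build */").hexdigest(),
}

def verify_script(name: str, fetched_bytes: bytes) -> bool:
    """Compare the SHA-256 of the script the server actually sent
    against the digest recorded from an audited version."""
    digest = hashlib.sha256(fetched_bytes).hexdigest()
    return KNOWN_GOOD.get(name) == digest

# The audited bytes pass; a swapped-in plaintext.js fails the check.
assert verify_script("crypto.js", b"/* trusted crypto.js build */")
assert not verify_script("crypto.js", b"/* malicious plaintext.js */")
```

The same pinning idea extends to the HTML page and each external resource the page pulls in; the hard part, as noted above, is keeping the list of "expected" resources complete.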


"Looks encrypted" isn't useful; encryption under a known key or with unsafe parameters "looked encrypted" too.

Are you sure you've captured every case that could influence what functions are bound to what symbols in the JS runtime?


  | "Looks encrypted" isn't useful; encryption
  | under a known key or with unsafe parameters
  | "looked encrypted" too.
Those go without saying. It would have to be part of a layered approach, and would catch stuff like plaintext going out the wire.

  | Are you sure you've captured every case that
  | could influence what functions are bound to
  | what symbols in the JS runtime?
I'm not. I wouldn't trust myself to implement such a thing (at least not without a lot of peer review from people I trust as knowledgeable), and even with such a 'detection' plugin, I would be wary of using in-browser crypto.

I'm curious what other inputs into the system you think there could be though. If you verify the HTML, and the external resources against 'known good' versions, then what else is there?

- Maybe there's malware already installed on the client system that's a threat, but that's a threat to everything, not something specific to in-browser crypto.

- A man-in-the-middle attack is mostly mitigated by using SSL (though not 100%).

- A compromised/malicious server, will end up changing the JS and/or HTML, which would (hopefully, if you've done a good job) not pass your verification checks.

- The other possibility would be a browser exploit that somehow is triggered before the plugin can raise a red flag about unverified JS/HTML.

--

The entire point of my posts in this discussion thread was to say that crypto in the browser vs. crypto on the server may have the same threat model (trusting the server + SSL), but they are not exactly the same. With in-browser-js crypto, as the client you have full access to the environment where the crypto is running. If it's happening on the server, it's a blackbox to you. This opens the possibility to have software running on the client side to verify that things are kosher. In the end, by the time that you're writing the software on the client-side to verify things you may as well just be doing the crypto in a browser plugin rather than in JS. I realize that it's mostly an academic argument.


Once you have a plugin to check all that stuff, why not have the plugin do the crypto, and just skip the javascript?


That's obviously the better solution. I'm just being pedantic about the crypto happening in the browser vs. on the server.


Once Colin noticed he had the wrong nonce value for his CTR mode encryption, he was able to fix it once, tell users to upgrade, and nobody ever had to consider it again.

On the other hand, every single time someone loads your page, the server has the opportunity to surreptitiously add a tiny snippet of Javascript that would fixate the CCM IV your code generates. Nobody would ever notice that, and you could do it selectively (or, more likely, based on a court order).
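The IV-fixation attack works because counter-style modes (CTR, and the CTR core of CCM) turn the block cipher into a keystream that gets XORed with the plaintext: fixate the IV and the keystream repeats, so the key drops out entirely. A toy Python sketch of that failure (SHA-256 stands in for AES as the keystream generator, since the stdlib has no AES; the reuse flaw is identical either way):

```python
import hashlib

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    """Toy counter-mode keystream. SHA-256 stands in for the AES
    block cipher; real CTR/CCM uses AES, but IV reuse breaks both
    the same way."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, fixed_iv = b"secret-key", b"fixated!"      # attacker pins the IV
p1, p2 = b"attack at dawn", b"retreat now!!!"
c1 = encrypt(key, fixed_iv, p1)
c2 = encrypt(key, fixed_iv, p2)

# With the IV fixated, XORing the two ciphertexts cancels the shared
# keystream and leaks the XOR of the plaintexts -- no key required.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

A one-line snippet of injected JS that pins the IV is all it takes to put every subsequent note into this state, which is the asymmetry being described: a native-code bug is fixed once, while a served page can be re-broken per request, per user.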


You're implicitly applying different threat models to browser-downloaded vs. alternatively downloaded code.

If the maintainer of JS code make a mistake and fixes it in future versions, there's no need to worry about it anymore either. However, every time I upgrade I'm just as much at risk from someone attacking either my connection or having attacked the server. So to believe it's more secure, you have to believe those things are easier to do in the context of a web application.


There are a few legal reasons: there is a difference in the expectation of privacy when you never actually hand the plaintext message to the server. This is the logical equivalent of the difference between letter mail and a telegraph. With a telegraph you have no expectation of privacy, and therefore no legal right to keep that information secret, as opposed to your letter mail. Furthermore, the government can't (at least as of yet) legally force a company that hasn't inserted a backdoor to crack its own security - meaning it can't force a security company to hack its customers to get their keys. While these don't result from any specific crypto issue, they are very real legal issues.


A breach of trust would be detectable in client side encryption, but not in server side. An average user wouldn't notice a difference, but a security researcher could. Any high-profile service that systematically tampers their own client-side encryption would very likely be caught.


You have acquired just enough rope to hang yourself with.



I don't see any key being stored client side. So you must be deriving the key from my password?

If that's the case, then my attack would be:

* Loop through a list of common passwords.

* Derive keys from those passwords.

* See if any decrypted results return an ASCII-like result.
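The three steps above are a standard offline dictionary attack. A hedged Python sketch of the loop, where the salt, wordlist, and victim password are all invented for illustration, and the candidate key is compared directly against the target key instead of attempting an AES decryption plus ASCII check (to keep the sketch stdlib-only):

```python
import hashlib

# Hypothetical parameters in the spirit of the app's setup:
# PBKDF2-HMAC-SHA256, 1000 iterations, a per-note salt.
SALT, ITERATIONS = b"per-note-salt", 1000

def derive_key(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, ITERATIONS)

victim_key = derive_key("letmein")   # what the ciphertext was keyed with

wordlist = ["123456", "password", "letmein", "qwerty"]
cracked = None
for guess in wordlist:
    # In the real attack you'd decrypt the note with each candidate
    # key and test whether the result looks like ASCII plaintext;
    # comparing derived keys directly is a stand-in for that check.
    if derive_key(guess) == victim_key:
        cracked = guess
        break

assert cracked == "letmein"
```

The cost per guess is one PBKDF2 run, which is exactly why the iteration count (discussed below in the thread) is the only knob standing between a weak password and this loop.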


That would work for a sufficiently broad definition of common. You're welcome to try. The challenge is that it'll take a long time unless you have incredibly fast hardware or are very good at guessing.


Eh, whatever key derivation algorithm you are using is running in a browser, and isn't triggering any long-running-script warnings. And I didn't see my browser block. So it can't be that expensive.

From an academic viewpoint, it's not really strong encryption if your AES key is derived from a password. Your key space is limited to ASCII characters, and 99% of users will not choose a strong password. So from my perspective, if you sent me a DB dump, I could read almost everything.


Email me and I'll send you the DB dump. You can brute force it. I don't think it's as trivial as you suspect, but I'd love for you to prove me wrong.


It's trivial to brute force for anyone who has a weak password.

More importantly, it's trivial for an adversary who cares.

If I'm encrypting a note containing state secrets to send to a foreign intelligence officer, the NSA has the technology (and more importantly, the resources) to brute force their way in.

And if your password is too complex to crack (read: a 256-bit key), you probably can't remember it either, which means you have to write it down somewhere; so an adversary who cares would find an outside channel (subpoena, hack your personal computer) to determine your key.

What is your key derivation algorithm? PBKDF2?


The key is derived by PBKDF2 with 1000 iterations. For weaker passwords, I suspect a couple of orders of magnitude of additional strengthening would be required.

Your point about weak passwords holds in both ordinary clients and in the browser. It's just a matter of degree. There are plenty of sufficiently strong passwords that are memorable. Since the degree of weakness tolerable is logarithmically proportional to the hashing time and JS is usually within an order of magnitude of native code, the additional entropy required is small given equivalent hashing time.
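The logarithmic relationship can be made concrete: every doubling of the PBKDF2 iteration count costs the attacker one extra bit of brute-force work, so iterations translate into "bonus" password entropy logarithmically. A small illustration (the 256,000-iteration figure is an arbitrary example, not the app's actual setting):

```python
import math

def bonus_bits(iter_new: int, iter_old: int) -> float:
    """Effective extra bits of password entropy gained by raising
    the PBKDF2 iteration count from iter_old to iter_new."""
    return math.log2(iter_new / iter_old)

# Going from 1000 iterations to 256,000 buys 8 bits -- roughly the
# entropy of one additional random alphanumeric character.
assert bonus_bits(256_000, 1000) == 8.0
# A mere doubling buys exactly one bit.
assert bonus_bits(2000, 1000) == 1.0
```

This is why the claim above holds: the tolerable weakness of a password scales with the log of the hashing time, so even JS running an order of magnitude slower than native code only costs a few bits of equivalent strength.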


Consider using crypto.randomBytes() for your key generation.


SJCL fills its initial entropy pool with crypto.getRandomValues()


Apologies, I missed that on first glance. I'm not sure why you'd want to run your CSPRNG output through a bunch of additional functions; at best you're doing nothing, at worst you're reducing the quality of the entropy.


NP. Checking code is good. It's not the clearest code, since the initialization is located separately from the main RNG function. If you actually tried to read the inline minified version of SJCL, I'm sorry, but impressed.


Can you explain what this means exactly: "I'll navigate to any URL you send me while logged into the site."

If I give you a URL what exactly are you going to do?

Also, when logged into the site, are you logged into the account where the note with claim instructions was made?


I'll paste it into another tab in a browser that has me logged into the account or I'll log into the account after pasting the link. Whichever you prefer. If you have another proposal, as long as it's something reasonable, I'm happy to entertain it.


tl;dr: You've set up the challenge in such a way that demonstrating any of the threat models against which client side crypto is weak would require compromising other layers of security first that are out of scope for the challenge.



