This article cuts to the heart of something I've wondered about for a long time.
The common advice in all the classic texts is that developers should not roll their own crypto because smarter people have thought of more vulnerabilities and addressed them in battle-tested code.
But news like this shows that there is an antithesis: the conventional encryption techniques are also potentially widely exploitable by state-level actors. Furthermore, if I were someone holding solutions to cherry-picked primes for well-understood algorithms in wide use, I'd be complaining loudly every time someone wrote a bespoke library too. I'd be paying to publish books that recommend no one write their own crypto because it's just such a darned hard problem, especially with so many high-quality alternatives out there, tested and ready to go.
Granted, one should certainly have an aversion to writing custom crypto for all of the many good reasons, but it makes me think it's worth putting more than minimal effort into it, especially when lives are on the line.
> But news like this shows that there is an antithesis: the conventional encryption techniques are also potentially widely exploitable by state-level actors.
Are they? Really? We know that NSA has lots of vectors for exploiting the system AROUND encryption. We have pretty solid evidence that the NSA has weakened some encryption standards to make them crackable.
We DON'T have evidence that encryption developed outside the milieu of the NSA is equally crackable. And circumstantial evidence suggests quite the opposite.
The problem is that there are lots of things to get right to get crypto right, and getting even one wrong leads to failure. Which is why we tell people not to roll their own encryption and to let the experts do it.
Another problem with too many people rolling their own encryption is that it dilutes attention and no system gets sufficiently audited.
It took 33 years before it was publicly announced that GCHQ succeeded in deciphering Enigma codes.
Every single time they used intercepted intelligence to their benefit, they made sure to leave false trails of other ways they could've known so the Germans wouldn't suspect.
Well, from what I've read, that's why they didn't share information about the 9-11 team. And during WWII, they let some convoys go down to protect their methods. It's a tough call. One thread of Stephenson's Cryptonomicon covers this.
Yes, he is one of those authors. Every block of every thread ends with a cliff-hanger, and you need to read two more blocks before you get resolution. But then, you've encountered three more cliff-hangers ;) So after the first read, I bookmark heavily, and read each thread independently. And sometimes I just reread my favorite threads. Such as Bobby's thread in this book. Also his ancestor in Baroque Cycle, "King of the Vagabonds".
Neal Stephenson's next book is The Rise and Fall of D.O.D.O., co-written with Nicole Galland, currently slated for release on June 13, 2017. It seems to be in a slightly different vein than his other works, but I'll read it as I do all his books.
Thanks! I don't know Ms. Galland's work, but it's not much of a stretch from Anathem ;) And consider that collaboration in Mongoliad. So hey, I know what I'll be reading in June.
I can't answer that at all, I'm just saying that if I ever get to the end of that 1100+ page brick.. I should probably travel to his town to express how good a storyteller he is.
If you haven't yet, read The Baroque Cycle too. It's set in the same universe, approximately 300 years earlier, with an ensemble cast of historical and fictional characters. Very long, very dense at times, and yet a masterful and very rewarding work in my opinion. The audiobooks read by Simon Prebble are also quite good if you don't have the time to read it in print but do have time where you could listen to it.
Yes! It's very cool to get the back-story of the Shaftoes and Waterhouses. And the enigmatic Enoch Root, who arguably appears as well in The Mongoliad trilogy. Maybe he's even mentioned in Anathem, but I may be stretching for that.
ENIGMA is easy compared to modern crypto; it did not have a sound mathematical analysis. Hell, there was no such thing as proper cryptanalysis back then.
Academia is much more advanced now; all the tools we have for reasoning about our cryptography were developed in academia, with no evidence that the NSA has shown any interest in that kind of stuff. (In fact the NSA has expressed disdain for academic cryptography).
"There are hundreds of mathematicians in the NSA, far more than in academic settings, so readership for a typical paper is wider than in the world at large ..." [1]
I am still surprised that the Japanese and Germans did not figure out their codes had been broken. The disaster of the U-boot campaign was pretty good evidence of that, if nothing else.
Besides, expecting a widely used and deployed cryptosystem to be uncompromised for years is absurd. They should have assumed it would be broken, and developed regular replacements.
> I am still surprised that the Japanese and Germans did not figure out their codes had been broken.
There must have been a lot of people who had suspicions.
But consider: for many years, US citizens who talked about ECHELON were considered crazies. Later, Bush's enormous surveillance expansion was mostly denied or dismissed. The 2016 Russian hacks of the DNC and the propaganda machine were brought up on national television during the debates. Yet there was denial, dismissal, and very little concern.
Without a plan for responding or reacting, denial is a very appealing way to deal with upsetting news. The Germans and the Japanese who were in a position to suspect that their communications had been compromised were also embedded in a totalitarian military chain of command, more focused on preserving the relative power of the people at the top than anything else. Questioning the efficacy of the system is easily cast as disloyalty. What could anyone do?
They organized airplane flyovers that "saw" the U-boats. The Germans did not know how many aircraft were patrolling and whether there was a high or low probability of being spotted.
If the British could not organize a parallel construction, they simply let it go. They knew the plan for the Crete invasion, but they could not create a story for how they learned it, so they preferred to lose naval control of a large part of the eastern Mediterranean. [0]
Let's imagine you've figured out that the codes were broken in Hitler's Germany. The only solution is replacing an expensive encryption system with another, equally expensive system, including all the training that goes along with it.
Who do you tell? And who is the guy that's going to go to Hitler to tell him that their unbreakable system is broken?
You go to Admiral Doenitz, who already suspected it was broken, and was talked into not changing it by underlings, not Hitler.
BTW, my reading of books about it suggests that one was not executed in the German military for questioning orders. One reason the German military was so effective is that underlings were allowed much discretion, and were listened to.
I'm not well versed on the subject, but I assume it was just another of those large-scale intelligence failures, like the https://en.wikipedia.org/wiki/Englandspiel only with the boot on the other foot. Groupthink in action again. Also, given the large number of important ciphers which were broken during the war, I'd guess wildly that the pre-war crypto communities (such as they were) were generally much too complacent about the risks from cryptanalysis, likely because ciphers had never been subjected to state attack on a Manhattan Project scale before. Comparable to the long time it apparently took for people to become generally aware of C buffer overflows as a serious security problem, maybe.
That's what they actually did; however, radar was a new thing that allowed a small number of British aircraft to regularly intercept bombers. Without any evidence, it must have seemed probable that something similar was locating the subs.
Detection worked just fine at night. The problem was: "Although the RAF control stations were aware of the location of the bombers, there was little they could do about them unless fighter pilots made visual contact." https://en.wikipedia.org/wiki/Radar_in_World_War_II
The codes were changed regularly but the system was compromised. Naval codes were harder to break and often the allies had long periods of being in the dark.
The British did significant amounts of data analysis and traffic analysis. e.g. estimating German tank production by looking at the serial numbers of captured / destroyed German tanks.
I don't recall anything about the Germans doing the same thing.
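The estimator behind that serial-number trick (the "German tank problem") is simple enough to sketch. A minimal illustration in Python, using the standard frequentist estimate:

```python
def estimate_total(serials):
    """German tank problem: estimate the total count N from observed
    serial numbers, using the frequentist estimate m + m/k - 1
    (m = largest serial seen, k = sample size)."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

# Example: four captured tanks with serials 19, 40, 42 and 60.
print(estimate_total([19, 40, 42, 60]))  # 74.0
```

The intuition: the largest serial seen underestimates the true total, and the average gap between observed serials tells you roughly how much to add back.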
Another factor (according to the excellent Battle of Wits by Budiansky) was that the Germans were overconfident that Enigma was unbreakable. It turned out their confidence in how much computation would be needed to decode Enigma was misplaced.
This was not a trivial system for its time. Do you change your ssh keys and certificates every day?
The Enigma had a new encryption code for every day, distributed on paper, torn off and destroyed once used. There were different codes and machines used in different branches of the army/navy, and the system was updated throughout the war.
The British didn't get to see the machines or their method for many years. There were 159 quintillion possible keys, and even at 1 million guesses/second it would take 5 million years to guess a code - and don't forget the codes changed every day. Also, remember there were no computers to do this, let alone one that could even remotely approach 1 million operations a second.
So you ought to be able to see why, at the time, people were pretty confident it couldn't be broken, and if they hadn't made some mistakes in its use, e.g. distributing weather reports, it might never have been.
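The 5-million-year figure is easy to sanity-check (the key count below is the commonly cited Enigma key-space estimate):

```python
keys = 158_962_555_217_826_360_000  # ~159 quintillion possible Enigma keys
guesses_per_second = 1_000_000
seconds_per_year = 365.25 * 24 * 3600
years = keys / guesses_per_second / seconds_per_year
print(f"{years / 1e6:.1f} million years")  # ~5.0 million years
```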
We are arguably much more complacent than they were, relative to our time. It was only recently that perfect forward secrecy became a thing in HTTPS, for example (i.e. a different key for each connection).
A bit unfair, dumping on people who had seventy or eighty years less experience, while we profit from widely published literature and history on the topic.
I mean, it's not like they could even go read the Wikipedia piece on the German tank serial numbers info leak. Might have been a feature of a certain seminal strategy game - "you have defeated A[213 of 330]" ;-)
They did have extensive experience with spying in general, and compartmentalization of it. The compartmentalization was not applied to encryption. They also knew that losing an enigma machine to the enemy could compromise it (and did), but they just apparently assumed that no U-boot lost its enigma machine to the enemy.
By the time statistical evidence could have grown strong enough to shine through the careful layers of deception, they were far too busy not noticing that they were losing the war to notice that they had lost the encryption battle. In a world of believers, only traitors quantify bad news.
Oh, they noticed all right. From "U-Boat Ace" by Jordan Vause pg. 103:
"retrieved a working Enigma machine along with the documents and code keys for three months. Not surprisingly, U-Bootwaffe fortunes declined in the following months, and from that point on Doenitz remained in doubt about the Enigma cyphers his boats were using. But the experts reassured him over and over again that they were sound, and so he retained them until the end of the war."
I haven't found anything on the subject, but he would have been alive when the Enigma break became public. It would be great to know his thoughts (and those of people like Speer).
We also know that the NSA hardened crypto against attacks that they knew about but that were not public (DES against differential cryptanalysis, and SHA-0). I guess the NSA is too big to be homogeneously the good guy or the bad guy.
Your comment shows a very dangerous misunderstanding of this issue. One of the reasons WeakDH/Logjam was so bad is that OpenSSL is so low-level that it forces application developers to supply their own DH parameters if they want to use Diffie-Hellman. Until recently, setting DH parameters meant writing complicated code and understanding obscure details of TLS like export ciphers, as well as knowing whether it was safe to reuse temporary Diffie-Hellman keys.
In other words, OpenSSL forced developers to do crypto to use Diffie-Hellman. Unsurprisingly, developers did a poor job, and we got WeakDH/Logjam/CVE-2016-0701 as a result. The answer is not more hand-rolled crypto as you suggest, but rather better crypto libraries written by experts that leave as few details to the application developer as possible.
So basically your point is that if app devs can't get DH parameters right, how on earth will they get the harder stuff right? I think that's a fine point, but you aren't really addressing the core concern, which is that "better crypto libraries written by experts" are precisely the ones most likely to be vulnerable to state actors doing massive prime computations. And by extension, state actors benefit from homogeneity of implementation.
Even if it takes a day for a kid at the NSA to break my custom crypto, that's better than if I use homogeneous crypto that takes 2ms to break. And if everyone is using bad custom crypto, that's a lot of time needed to break each one individually. The argument is very close to the one against grain monocultures: while they have the best yields, resistance to pests, and so forth, they are also more vulnerable to as-yet-unknown diseases. Biodiversity is a good defense against that contingency.
No, because there's a simple solution to state actors doing massive computations: make the parameters so large (2048 bits in the case of DH) that the computations are infeasible.
DH precomputation attacks are only a problem with smaller prime sizes (e.g. 1024 bits). Experts have always known to avoid small parameters, but developers copy-and-pasted them into their code without knowing they were bad.
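To make the point concrete, here's a toy illustration (Python, not real crypto) of why parameter size is the whole game: recovering a Diffie-Hellman secret by brute-forcing the discrete log, trivial in a tiny group but infeasible at 2048 bits:

```python
def brute_force_dlog(g, h, p):
    """Exhaustively search for x with g^x == h (mod p). The cost grows
    with the group size, which is why experts insist on large primes."""
    acc = 1
    for x in range(p):
        if acc == h:
            return x
        acc = (acc * g) % p
    return None

p, g = 1019, 2            # toy prime; real DH uses 2048-bit primes
secret = 777
public = pow(g, secret, p)
print(brute_force_dlog(g, public, p))  # 777 -- recovered instantly
```

Real attacks like the Logjam precomputation are far cleverer than exhaustive search, but the lesson is the same: the defense is a bigger prime, not a different kind of secret.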
This kind of parallels what happened with MD5 fingerprints in X.509 certificates: for years the writing was on the wall that MD5 collisions were imminent given better algorithms and more computing power, yet it took a number of practical exploits by individual researchers to convince the industry that MD5 had to go away. One good thing to have come out of it was that we really learned the importance of future-proofing our crypto, and the phase-out of SHA-1 was carried out a lot more elegantly.
Similarly, it is now expected that state-level actors can already break 1024-bit RSA, and 128-bit symmetric ciphers may be the next thing to fall. Hence their use should be avoided wherever possible.
That's a very good point, but I don't think it invalidates the "biodiversity" argument. It's a different kind of defense, but it is a defense. For example, if there is a breakthrough in prime factorization, then heterogeneous implementations may be the only security left to us.
I recall an interview with William Binney where he suggested simply rearranging your message's payload at the protocol level would be enough to thwart automated analysis. It would require human attention, which is ultimately finite.
The trade-off there is that it might draw unwanted attention. Such an approach may be more suited for use inside of a message already encrypted using standard means.
It's worth noting that if there's no network interception angle, the NSA can easily penetrate all but the most hardened systems should they be so inclined.
Binney is suggesting the proper approach - safe manual customizations that require manual cryptanalysis, which is not scalable.
If only a few percent of users started rolling their own custom crypto, even no stronger than ROT-13, that would completely overwhelm the interception industry, including the state-level players. There is no automated tool that can cryptanalyze in real time someone's naive but unique idea of what crypto should be.
In other words, this shifts from the paradigm where a few high priests of open source crypto design a few key components for billions to use, and cryptanalysis has to deal with only a few problems which are eventually automated, to one where tens of thousands of brains construct their own weird crypto which has to be manually cryptanalyzed. Cryptanalysis agencies would need to employ millions to keep up.
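As a toy sketch of what such an extra layer might look like - a keyed transposition applied on top of (never instead of) real encryption; illustrative only, and every name here is made up:

```python
import random

def scramble(data: bytes, key: int) -> bytes:
    """Permute the byte order with a key-seeded shuffle (a transposition,
    not encryption). Meant as an extra layer over real crypto."""
    perm = list(range(len(data)))
    random.Random(key).shuffle(perm)
    return bytes(data[i] for i in perm)

def unscramble(data: bytes, key: int) -> bytes:
    """Invert the permutation produced by scramble()."""
    perm = list(range(len(data)))
    random.Random(key).shuffle(perm)
    out = bytearray(len(data))
    for dst, src in enumerate(perm):
        out[src] = data[dst]
    return bytes(out)

payload = b"already-AES-encrypted bytes"
assert unscramble(scramble(payload, 1234), 1234) == payload
```

On its own this is weak; the argument above is only that thousands of such idiosyncratic layers are expensive to deal with in aggregate.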
Feel free to do this on top of the blessed crypto stacks.
> There is no automated tool that can cryptanalyze in real time someone's naive but unique idea what crypto should be.
This might depend on how big the possibility space is. For example, you could imagine someone making a similar argument about transformations to passwords (like adding digits or punctuation, writing words backwards, changing letters to symbols that resemble them visually, and so on). A lot of people have independently come up with these transformations and correctly reasoned that they expand the space of possible passwords and add uncertainty (entropy) for someone trying to guess a password, especially if they're used in an inconsistent rather than systematic manner.
Still, password crackers have had great success explicitly enumerating this possibility space and brute-forcing against it, basically because each transformation that people are particularly likely to think of only adds a relatively small amount of uncertainty on the scale of the overall password-cracking problem. This was already recognized at the time of Alec Muffett's Crack program (which let you use manually specified transformation rules to expand your password-guessing dictionary), and attackers' heuristics and databases have only gotten more powerful since then. It's also discussed in the password-strength xkcd.
Here Randall argues that the substitutions added about 11 bits of entropy, plus "a few more bits" because of uncertainty about which ones were used. That wasn't enough to defeat realistic adversaries in this context.
I'm concerned that a lot of text obfuscation methods that people would think of will fall into this category and perhaps provide only a handful of bits of additional entropy; for example, they might use familiar encodings that are well-known to puzzlers (like Morse code, binary, semaphore, the NATO alphabet), they might write things backwards or in a customized order at an n-gram level, they might use what is effectively a simple substitution cipher, and a few other possibilities.
I don't think it would be difficult to try an extremely large number of these possibilities, combined with sophisticated statistical tests for plaintext. I bet versions of these analysis tools that work pretty well were developed decades ago. Computers are very fast and combinations of brute force and statistical analysis are very powerful.
To be clear, I wouldn't discourage people from doing this (on top of standard communications security methods), and I would agree that it might sometimes provide a benefit. But I would discourage people from assuming that their homegrown text obfuscator is giving more benefit than using "Tr0ub4dor&3" instead of "troubador" as their password (a non-zero benefit, yet an inadequate benefit against sophisticated adversaries).
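The enumeration argument is easy to make concrete: count how many leet-style variants of a base word a cracker must try. (The substitution table below is an assumption, a small subset of what real cracking rulesets cover.)

```python
import math

# Assumed substitution table: each letter maps to the characters
# a cracker would also try in its place.
SUBS = {"a": "a4@", "e": "e3", "i": "i1!", "o": "o0", "s": "s5$"}

def variant_count(word: str) -> int:
    """Number of substitution variants an attacker must enumerate."""
    return math.prod(len(SUBS.get(c, c)) for c in word.lower())

n = variant_count("troubador")
print(n, f"variants, ~{math.log2(n):.1f} extra bits")  # 12 variants, ~3.6 bits
```

A few extra bits means a few times more guesses - nothing to a cracking rig making billions of guesses per second.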
Maybe there's a way to change that picture, but I guess it would involve having people spend a lot more time studying and thinking about it, in the hope of greatly expanding the space of possible encoding transformations that they're capable of thinking up. And then you still have to be careful of statistical analysis. We might think we're very clever by inventing a new transliteration of our language into a script that was never before used to transcribe it, and then a statistical tool may still successfully analyze that transliteration as a simple substitution cipher and break it automatically!
This is true for manual (mental) text scrambling. I remember the time when I thought I invented the keyboard-pattern passwords.
However, if this home-brewed crypto involves coding (and requires users to learn an appropriate language), then it's possible that the entropy harvested from the brains of such novices will be orders of magnitude beyond simple text scrambling.
For example, take a simple DES implementation and customize the S-boxes and/or the key-generation phase. Do not publish the program except to your peers. Yes, S-boxes filled with your dog's vaccination report are likely more vulnerable to differential cryptanalysis - but how much ciphertext would that attack take, not to mention the analyst's time?
We are not talking here about someone specifically targeted. Custom obscurity works great when practiced massively, as de-obscuring has to be done manually in each case.
The bigger issue is whether privacy and secrecy should be simple so that anyone can use them. We may be well past that - standardized security is obviously dead in the water. If security is important and affects one's life, then one should invest effort in it, as one invests effort in hygiene, education, car maintenance, etc.
The requirement that security must be simple so that even idiots can use it is a diversion, a fallacy that imposes inherently weak crypto on all. This is the biggest lie plaguing crypto since the first crypto war.
I agree, but I wonder about how many people can be induced to devise a custom block cipher and somehow communicate an implementation of it to their communication partners in a way that isn't systematic and doesn't depend on the security of other ciphers. (Presumably you lose the marginal benefit if you make a popular "make your own block cipher and share it with your friends" tool!)
Well, it's been pretty tough to get people to adopt some existing crypto tools like PGP that represent a lower bar than writing your own crypto primitives. I'm completely sympathetic to the idea that this kind of literacy could lead to habits and practices that make surveillance harder and that that would be a great thing, but it seems like ubiquitous use of existing peer-reviewed crypto tools represents much lower-hanging fruit.
And in terms of spending, say, 3 hours learning to use PGP and/or get your friends to use it with you vs. 3 hours trying to write your own block cipher and/or get your friends to use it with you, I'd expect almost everyone in the world to get more privacy benefit from the former.
Well, if we agree that a moderate effort (comparable to dental checkups) would make a big and positive difference in the surveillance arena, and yet assume that 'people' cannot be induced to do it, perhaps we are dealing with ideology and indoctrination, not with technical problems. If so, no technical solution can be applied before the ideological issues are taken care of.
To (ab)use another analogy, look at the history of the tobacco (ab)use. It was hard to get people not to indulge in it just because they will have problems 20 years from now, which is close to the surveillance damage. It took years and huge efforts to cut down the tobacco use.
I agree with the ideology observation and probably also with the tobacco analogy, but I don't think I agree with the dental checkup analogy; here the difficulty is network effects.
Getting the people I e-mail with most to use PGP encryption with me would probably require 20 minutes to 5 hours, probably in person, with each of them. That is likely more time than people spend on professional dental care annually and, if I'm doing the setup, I have to spend that time with each correspondent.
Getting them to also use some kind of homegrown obfuscator or custom block cipher with me (but not with their other correspondents?) is another effort of several hours per person, again requiring meeting in person with each correspondent.
Both of these mechanisms are then very fragile if a given correspondent changes devices or operating systems.
You might argue that, for each person, learning how to use a tool like GPG is on par with getting annual dental checkups (possibly annoying and time-consuming, but just a few hours, and with potential hard-to-foresee benefits over the long term). And that's fine, but if we're still talking about adding completely homegrown custom layers intended to defeat automated cryptanalysis, the picture has gotten a lot more complex. By definition, the people adopting these solutions have to design and deploy them in a customized, personalized way, to avoid having large populations of people adopting the same tools and techniques. If this is supposed to be reliable (and somehow coexist with other custom techniques, so that Alice and Bob can use one homegrown obfuscation layer while Bob and Carol use a completely different one!), I expect it would be an order of magnitude more effort even supposing that most of the participants are already capable software developers.
I mean, maybe there's a way to extend OpenPGP so that you declare a "custom cipher" on top of the main ciphersuite and then implement it as a plugin so you "just" have to do the core work on your custom thing and not on the surrounding key exchange mechanisms, data format, and e-mail client integration... but it still represents a massive amount of additional work for each pair or small group of communicating parties, even given this hypothetical infrastructure.
If the perceived importance of privacy is high enough, a fraction of people will spend days and weeks acquiring block cipher customization skills. The goal should not be all people, as that's doomed from the start. 15% is infinitely better than nothing.
The task at hand is to elevate the perceived importance of privacy and the importance of self-defense/education to the point of writing some insecure (but unique) code. The latter is the novel point in this discussion.
Currently, the dogma is to explain the importance of privacy, and then point the newly aware audience to one of the 2-3 options developed by 2-3 teams/companies. This will never work towards general privacy, because if all Internet users started using Signal, PGP, Snapchat, etc. tomorrow, the day after tomorrow these products would be backdoored, and the backdoored versions pushed as updates, since these 2-3 teams have addresses and families, and engineers and CEOs in general do not respond well to anal probes, audits and accidents. This has happened so many times already that it boggles the mind how anyone with a shred of integrity can preach centralized prêt-à-porter security solutions.
I do not know how to clearly demonstrate the importance of privacy today and the fact that you don't have centralized friends.
Backdoored tools are a pretty different threat from unknown cryptanalytic advances and should be addressed in different ways. A lot of research right now aims to address different backdooring methods (I gave a related talk at CCC a couple of years ago); a first step is to make sure that published source code corresponds to binaries and that all users receive the same software updates. Neither of these is guaranteed today, but we're making progress toward both.
A trickier problem is bugdoors, where someone intentionally introduces an exploitable vulnerability into a code base. The Underhanded C Contest points at some of these risks.
It seems likely that this approach has already been used against some important software somewhere, regardless of the specific methods and incentives used to get it into the code.
Maybe the experience gained from these contests will point at ways of avoiding bugdoors in the future, whether formal methods, static analyzers, changes to language specifications, coding standards, improved auditor skill, or something else. (I'm personally a bit hopeful about formal methods; they've seen an encouraging new level of success in the academic world over the last few years.)
There are definitely principal-agent risks in having everyone use Signal or GPG. But you're not going to invent something as secure as AES by yourself, nor something as secure as Signal or GPG. The crypto world has made massive advances in the last couple of decades by respecting Kerckhoffs's principle. There are worthwhile arguments like
that you can get a security benefit against some threats by adding some non-globally-shared obscurity layers. That doesn't mean you can go off on your own and do as well as publicly-scrutinized stuff does.
I think I should get the authors of "Imperfect Forward Secrecy" to opine on this. They may have identified one of the decade's biggest security problems from cryptographic monoculture, but still I doubt they would agree that we can make much progress with home-grown communications security solutions. (But as discussed elsewhere in this thread, it would be helpful to have good guidance for secure composition of cryptographic primitives, so if you're not sure of one layer and want to add another, you can at least do so in a way that doesn't make things worse.)
Zero days, backdoors and unknown cryptanalytic capabilities have similar if not the same effect on centralized systems.
I am skeptical about open source efforts being a match for (tens of) thousands of well-paid professionals working for well-funded organizations, 5 days a week, year after year. It just doesn't make sense that the point efforts of a few can accomplish anything sustainable and effective against such an adversary. Yes, open source people are brilliant, heroic, etc. It doesn't matter. These efforts need to be distributed from dozens to tens of thousands, and then they will begin to be a match for the organizations we are dealing with.
To eliminate all the issues and pushback about the basic security of home-brew crypto (although I think that in the big picture it doesn't matter - either you are targeted, or they don't have the resources to figure out your particular ROT-17.5), it should always be used on top of your favorite official crypto stack. So don't worry about being inferior to AES - use AES as well if you trust it.
Home-brew crypto works. In Oliver Stone's latest film, the whistleblower takes out flash storage hidden in a Rubik's cube. It needed to work only once, for him. If it is important, people will do it.
In the big picture, we may be dealing with the elitism of brain-work. It feels warm and fuzzy to be a member of an elite group helping billions. But as in other human enterprises, real and sustainable success happens only when such brain-work gets widely distributed. While not the easiest thing in the world, crypto is not esoteric rocket science, and the notion that the few who had to resort to programming for a living and for fun are somehow more mentally capable than the remaining 99.9999% is pure bs.
> it would be helpful to have good guidance for secure composition of cryptographic primitives
Yes.
An easy-to-read program executing a generic Feistel-network block cipher with exchangeable/modular S-boxes and trapdoor functions would be great. I'm looking into something like that.
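A minimal sketch of that idea in Python - illustrative only, with a deliberately weak placeholder round function standing in for a real S-box design:

```python
def feistel(block: bytes, round_keys, f, decrypt=False):
    """Generic balanced Feistel network with a pluggable round function
    f(half, key) -> bytes. Decryption is the same walk with the round
    keys reversed: the structure inverts itself regardless of f."""
    half = len(block) // 2
    L, R = block[:half], block[half:]
    for k in (reversed(round_keys) if decrypt else round_keys):
        L, R = R, bytes(a ^ b for a, b in zip(L, f(R, k)))
    return R + L  # undo the final swap

def demo_f(half: bytes, key: int) -> bytes:
    """Placeholder round function; swap in your own S-box lookup here."""
    return bytes((b * 37 + key) & 0xFF for b in half)

pt = b"16-byte message!"
keys = [3, 141, 59, 26]
ct = feistel(pt, keys, demo_f)
assert feistel(ct, keys, demo_f, decrypt=True) == pt
```

The appeal of the Feistel structure for this purpose is exactly that the round function doesn't need to be invertible, so any modular "customization" still yields a working cipher.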
I was thinking of changing the protocol, like adding or subtracting a few AES rounds in an SSH connection, so a standard client doesn't even speak your protocol and can't connect.
This also reminds me of when somebody here wrote about meeting one of the SHA-3 contest cryptographers (BLAKE?) at a con. The guy shrugged off his efforts, saying it was all for fun: whatever device you have is already so completely backdoored, whether through sabotaged/weakened standards or outright (proprietary microcode), that believing any algorithm can keep a secret from nation-state adversaries is laughable.
> believing any algorithm can keep a secret from nation state adversaries is laughable.
Well, yeah, it is.
Defence against (crypto-buzzword) state-level actors (SLAs) doesn't start with gpg and FDE but rather with operational security, or, even better, with not being a worthwhile target in the first place (if your briefcase of secret documents is already on CNN, there is no need to infiltrate your stuff beyond gaining assurance that you don't have another briefcase - which is the thing you want to prove as publicly as possible to convince your highly paranoid adversary).
There is that saying "never trust a computer you can't throw out the window". Computers are networked. You're trusting a network of computers, even if you don't use "networked functionality". Can you throw the internet out the window? Probably not! Hence, don't trust computers. People who trust computers to keep their secrets are foolish¹ (Trottel), so just don't.
¹ Notice how governments²³ trust computers to keep their secrets safe, and how admirably and totally that failed. Ask yourself, what wasn't leaked by Snowden? Stuff that wasn't in any computer in the first place!
² Or how celebrities trusted computers to keep their dickpics safe.
³ Or how companies trusted computers to keep their business secrets, well, uh, secret.
To make my point another way: Computers are for disseminating information. If secrecy is more important than dissemination, then don't use a computer.
Umm... I think the number of things not leaked by Snowden far surpasses the number of things leaked by him, and the things he didn't leak weren't withheld because they weren't on a computer; they just weren't in the files he accessed.
I think the NSA definitely seemed to hit a point where they calmed down. At first there was a palpable sense of "O NOE" as the NSA scrambled to discover what exactly had been stolen. And then just as suddenly, it seemed to go away. I don't think the really shocking stuff was exfil'ed by Snowden. I won't speculate what wasn't revealed, except that I think it's telling that the NSA has been asking for the ability to award high-level mathematics commendations to its workers without explaining why they deserve the award. Not much in Snowden's documents seems to fit that particular bill. The NSA isn't going to want to award the Fields Medal to someone for 0day exploits, weakening crypto infrastructure, or friggin USB hacks.
<At first there was a palpable sense of "O NOE" as the NSA scrambled to discover what exactly had been stolen. And then just as suddenly, it seemed to go away.>
I think a big part of them not commenting on the issue was that they didn't know what Snowden had and they got caught in a number of lies early on.
NSA shill: everyone should use curve25519
Hacker News: downvote to hell anyone who disagrees
Got that off my chest.
"Protocol level" is the order of the bits: for example, the octets in the TCP protocol.
"Rearranging the payload" in this case means moving around those octets. And while of course it would work, it is a bit simplistic (so yeah, yek is not far off).
The "best" solution, IMHO, is to chain more than one cipher together, like TrueCrypt does. For example, if you are using a pre-shared key, use that key to encrypt the DHE with a symmetric cipher and a nonce, or some other non-trivial obfuscation.
But the point is valid: even simple obfuscation requires human intervention, which breaks mass surveillance en masse.
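A hedged sketch of the pre-shared-key idea above: wrap a handshake value under a PSK so a passive observer never sees a recognizable protocol. The HMAC-counter keystream here is an illustrative stand-in for a real symmetric cipher, and the variable names are invented for the example.

```python
# Wrapping a handshake value under a pre-shared key (PSK) so that a passive
# observer never sees a recognizable protocol. The HMAC-counter keystream
# is an illustrative stand-in for a real symmetric cipher -- not vetted.
import hashlib, hmac, os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = b""
    ctr = 0
    while len(out) < len(data):
        out += hmac.new(key, nonce + ctr.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

psk = os.urandom(32)
nonce = os.urandom(16)
dh_public = os.urandom(32)  # stand-in for a DHE public value
wrapped = keystream_xor(psk, nonce, dh_public)

# The same operation undoes the wrapping for a peer holding the PSK.
assert keystream_xor(psk, nonce, wrapped) == dh_public
```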
Nobody is secure against their home country government, which has the power to both:
-- Mount black bag attacks on your premises.
-- Snoop the networks of internet service providers.
On the other hand, most or all of the following are true of my keys-to-the-kingdom email account (the one from which most other passwords can be reset):
-- It is very unlikely to be hacked by any informed guesses of my password, social engineering of my "secret question" information, shenanigans with my mobile phone, etc.
-- It is with a provider who seems less likely to get hacked than even Yahoo.
-- It has security that would be hard to brute force.
-- It has a unique password that I don't use on other, more vulnerable accounts.
Even so, I assume that all my communications are available to the US government, at least by after-the-fact subpoena if not actually real time.
->Nobody is secure against their home country government, which has the power to both:
-- Mount black bag attacks on your premises. -- Snoop the networks of internet service providers.
Indeed, not even our governments are safe from the government. Like when they forced backdoors into Juniper routers, then used Juniper routers in government facilities, resulting in widespread compromise of most if not all government facilities and the loss of billions of dollars of R&D advantage.
At least they put tons of effort into securing the voting system and didn't let a Russian stooge get declared leader of the free world, tho. So there is still some hope.
As for us, tho: encryption at rest and intrusion detection are generally good practice.
Something is nagging me for quite some time. DJB invented curve25519, which is advertised to offer 128 bits security. On the other hand, DJB also said 128 bits are not enough for symmetric crypto. https://cr.yp.to/snuffle/bruteforce-20050425.pdf
This makes me a bit uneasy: would a parallel brute force search be much more difficult for elliptic curves than for symmetric ciphers? Why? By the way, a similar problem arises with poly1305. I'm missing something.
Well, curve25519 uses 256-bit keys (actually, a few bits less than 256). The parallel brute force search should take time similar to that of a 256-bit symmetric cipher.
As for poly1305, it actually uses not one, but two separate 128-bit keys. The authentication tag computed using the first key is encrypted with the second key. For a brute force search, it should be as hard as breaking something with a single key of around 256 bits.
A batch attack on 128 bit symmetric lowers the total cost to below 2^128 computations to successfully decrypt messages. The cost approximately follows the birthday paradox math, and thus becomes cheaper at a slightly faster than linear rate given a higher volume of ciphertexts.
A batch attack on asymmetric elliptic-curve crypto like curve25519 costs MORE than 2^128; the cost grows logarithmically instead of linearly.
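The rough numbers behind this exchange can be sanity-checked with back-of-the-envelope arithmetic. The 252-bit group size and the 2^40 batch volume below are illustrative assumptions, not exact figures.

```python
# Rough cost arithmetic for the two claims above (group size and batch
# volume are illustrative assumptions, not exact figures).
import math

group_bits = 252          # approximate order of the curve25519 subgroup
rho_cost = group_bits / 2 # Pollard's rho: ~sqrt(group order) steps,
                          # so ~2^126 for curve25519

sym_bits = 128
targets = 2 ** 40         # ciphertexts attacked in one batch
per_key_cost = sym_bits - math.log2(targets)
                          # multi-target brute force: ~2^88 per key
```

So a generic discrete-log attack on curve25519 stays around 2^126, while a sufficiently large batch attack on a 128-bit symmetric cipher drops the per-key cost well below 2^128, which is the asymmetry the parent comments are pointing at.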
A contrary way of looking at this is that we can expect standard crypto implementations to switch to randomizing chosen primes. If you had been rolling your own, would you keep up to date with the state-of-the-art and switch, too?
Most hand-rollers probably won't, IMHO, because they don't have enough resources to spare a person to keep up with the art, let alone keep their own implementations up to date with it.
> ...I was someone holding solutions to cherry-picked primes for well-understood algorithms in wide use, I'd be complaining loudly every time someone wrote a bespoke library too.
Sure. This is pretty standard adversarial stuff, though. It's up to researchers to figure things out and publish. I don't see any evidence that this isn't happening. This article is a perfect example!
What if 2048-bit were the standard? What if people used multiple passes with different primes? (That may not give any improved security; I'm not familiar with the details.)
It seems like a massive waste of energy and money to crack a few primes if the advantage you gain can be erased so quickly.
I agree it seems like writing custom bad crypto is way better than good crypto that can be broken in bulk. (Using both kinds together seems clever). A good analyst can likely break my homemade crypto in twenty minutes, but that's orders of magnitude better than using a compromised strong crypto!
Another question: Are good primes that hard to come by? Shouldn't apps generate new primes instead of re-using old ones?
Last question: are there good cryptosystems that aren't sensitive to a state-level actor that can break chosen primes in linear time?
Yes, that's only useful under the original assumption of the article: that a state-level actor must spend a nontrivial part of its budget to break one prime. The back-of-the-envelope calculation, if they did classical factorization, is that one or a few 1024-bit primes were possible but 100 were not. Of course, next year they will have twice the CPU time, or a better quantum computer, so it's a short-term win.
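The "generate fresh primes instead of reusing old ones" idea can be sketched as a safe-prime search. This uses a toy 128-bit size for speed where real DH uses 2048+ bits, and production code should call a vetted library routine rather than hand-rolled primality testing.

```python
# Generating a fresh "safe prime" p = 2q + 1 instead of reusing a shared,
# precomputed one. Toy size (128 bits) for speed; real DH uses 2048+ bits,
# and production code should use a vetted library routine.
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):  # Miller-Rabin with random bases
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def fresh_safe_prime(bits: int) -> int:
    while True:
        q = random.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
        if is_probable_prime(q) and is_probable_prime(2 * q + 1):
            return 2 * q + 1

p = fresh_safe_prime(128)
assert p.bit_length() == 128
```

The cost of this search (and it grows steeply with bit length) is exactly why so many deployments reused a handful of well-known primes, which is what makes the precomputation attack in the article economical.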
Thanks for saying exactly the same thing that's come to my mind over and over again. I would totally roll my own crypto on top of an existing cryptosystem if I could and if someone with this much computational power was in my threat model. That way I would get the best of both.
Same goes for hashing by the way: I'd totally stack multiple hash functions on top of each other too. But in this I'd only stack cryptographic ones that have not yet been broken; I wouldn't roll my own.
You never get the best if you roll your own. The problem is that there are many broad classes of attack that cover wide ranges of knowledge. If OpenSSL can't get TLS right, how do you expect to get your home-rolled crypto right?
If you choose to roll your own, you will almost certainly have more vulnerabilities that are easier to find. Your gamble is that agencies like the NSA are already holding vulnerabilities to the common libraries, and that it's more expensive for them to break your weaker crypto (it will be weaker even if you are world class) because they have to take time to break it.
Edit: stacking multiple hash functions naively is a bad idea as well. As a thought experiment, let's say that I have a hash function that is effectively (because the NSA broke it) as useful as converting the input to all zeroes. Stacking more hashes on top of that hash won't help you, because it will still be trivial to find collisions.
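The thought experiment above can be made concrete in a few lines; the broken hash is of course hypothetical.

```python
# If the first hash in a sequential stack is broken (here: an extreme,
# hypothetical hash that maps everything to zeroes), stacking a strong
# hash on top does not restore collision resistance.
import hashlib

def broken_hash(data: bytes) -> bytes:
    return b"\x00" * 32  # hypothetical totally-broken hash

def stacked(data: bytes) -> bytes:
    return hashlib.sha256(broken_hash(data)).digest()

# Every pair of inputs collides, despite SHA-256 on the outside.
assert stacked(b"attack at dawn") == stacked(b"attack at dusk")
```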
>> I would totally roll my own crypto on top of an existing cryptosystem [...] That way I would get the best of both.
> You never get the best if you roll your own.
Did you even read my comment? I'm not talking about my own crypto on its own, I'm talking about my own crypto on top of an existing one. Two layers is at least as secure as a single one of either layer. It's as simple as that. There's just no argument to be made for your side here.
-------------
EDIT: since you added an edit:
> stacking multiple hash functions naively is a bad idea as well
Really now? Reading on, though, you say:
> Stacking more hashes on top of that hash won't help you
Oh, I thought you said it's "bad"? Now you're just saying it "won't help". But that's neither a fact, nor the question in the first place. The question is whether it will hurt, and the answer is that it does not. Again, this is common sense. There's no argument to be made for your side against it.
> Two layers is at least as secure as a single one of either layer.
This is not uniformly true for cryptosystems--it is not naively the case that P(Q(X)) is a secure form of encryption, just because P/Q is. A contrived example is when P and Q are inverses (so P(Q(X)) is plaintext), but it should be obvious that if P has the wrong interaction with Q it might make some of the message easier to attack.
> The question is whether it will hurt, and the answer is that it does not. Again, this is common sense.
It can hurt. It's subtle, but consider if a hash in the middle has a distribution issue (extreme case--hash in the middle maps everything to 0, now your entire hash stack is broken). In short: stacked hashes are no stronger than the first hash in the sequence (collision there = collision in the stacked algorithm) and have the potential to be weaker.
> This is not uniformly true for cryptosystems--it is not naively the case that P(Q(X)) is a secure form of encryption, just because P/Q is. A contrived example is when P and Q are inverses
If P and Q are inverses, then Q is not secure, because you could just apply P to its output.
The same holds true for encryption: if you have two independent keys K1 and K2, then if Mallory can crack P(Q(X, K2), K1), she can crack Q(X, K2) just by picking a K1 at random and computing P(Q(X, K2), K1).
> A contrived example is when P and Q are inverses
I feel like you should already realize this (in which case I don't get why you're posting the comment), but while that's a cute mathematical existence proof, it's totally irrelevant, as it's not something that can just happen out of the blue. Ciphertext looks random; you can't just reverse randomness without having the key/seed. So that's impossible in practice unless you've either (a) somehow broken the crypto, (b) used related keys for both algorithms (which is obviously stupid and not something you would do if you thought about this for 5 seconds), or (c) done something else silly along those lines, all of which even rudimentary knowledge of cryptography (or, one might even argue, common sense) would prevent.
> It can hurt. It's subtle, but consider if a hash in the middle has a distribution issue
I don't know when you read my comment, but I edited it (I think) some ~15 minutes before you posted your comment to clarify that I wasn't referring to stacking arbitrary hashes. Read it again. I was referring to stacking hashes that are already thought to be cryptographically secure.
> I don't know when you read my comment, but I edited it (I think) some ~15 minutes before you posted your comment to clarify that I wasn't referring to stacking arbitrary hashes. Read it again. I was referring to stacking hashes that are already thought to be cryptographically secure.
It still doesn't make a difference in my argument. My point is that you gain no added strength via a stacked hash implementation because it's as weak as the first hash in the sequence, and that it is potentially worse because you can also attack it via attacks on later hashes in the sequence.
A stacked hash is as weak as the first hash in the sequence--this should be obvious. A collision in the first hash function will obviously cascade into all later ones, so your stacked hash function is as weak, or strong, as the first hash function. That means you gain no strength.
What I was showing via an obvious/contrived example (to keep the math easy) was that it is also possible to attack a stacked hash via weaknesses in later hashes in the sequence. I wasn't (I thought obviously) implying that you'd intentionally choose a hash that was weak for a middle one--but there are all sorts of hashes we once thought were secure that we don't think are secure anymore.
> I feel like you should already realize this (in which case I don't get why you're posting the comment), but while that's a cute mathematical existence proof, it's totally irrelevant as it's not something that can just happen out of the blue.
Fundamentally, all modern crypto relies heavily on math. I made a "cute mathematical existence proof" to make it obvious how stacking ciphers can weaken an encryption system. The reality is that exactly how ciphers interact is a subtle and hard to measure point, but it isn't safe to assume that composing cryptosystems will be as secure as either cryptosystem on its own, because features of the two systems could interact to weaken the overall security of the cryptosystem.
> It still doesn't make a difference in my argument. My point is that you gain no added strength via a stacked hash implementation because it's as weak as the first hash in the sequence
God, I really wish I could downvote replies.
Nobody said you should be applying the hash functions in sequence. There are at least 3 obvious approaches: (1) applying the functions sequentially, (2) concatenating their outputs, (3) XORing their outputs. None of these takes rocket science to figure out, and some 5 seconds of thinking would easily rule out #1 and #2 as inferior to #3.
Honest question: did you even spend 5 seconds actually thinking about what I wrote before deciding I must be wrong? I'm not sure if you realize this, but when you reply so confidently without thinking, you (and many others) actively harm the whole field of infosec. I'm so frustrated and fed up with you and so many other people's overconfidence and lack of willingness to think for 5 seconds when it comes to cryptography.
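For concreteness, the three combiners under debate look like this over two unrelated real hash functions. Which one is preferable is exactly what this thread argues about; the sketch only shows the constructions themselves.

```python
# The three hash combiners under discussion, instantiated over two
# unrelated real hash functions (SHA-256 and SHA3-256).
import hashlib

def h1(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

def h2(m: bytes) -> bytes:
    return hashlib.sha3_256(m).digest()

def sequential(m: bytes) -> bytes:  # (1) apply in sequence
    return h2(h1(m))

def concat(m: bytes) -> bytes:      # (2) concatenate outputs
    return h1(m) + h2(m)

def xor_comb(m: bytes) -> bytes:    # (3) XOR outputs
    return bytes(a ^ b for a, b in zip(h1(m), h2(m)))
```

Note the output sizes differ: sequential and XOR keep 32 bytes, while concatenation doubles to 64, which is part of what drives the brute-force versus broken-hash tradeoff argued below.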
>> I feel like you should already realize this (in which case I don't get why you're posting the comment), but while that's a cute mathematical existence proof, it's totally irrelevant as it's not something that can just happen out of the blue.
> Fundamentally, all modern crypto relies heavily on math. I made a "cute mathematical existence proof" to make it obvious how stacking ciphers can weaken an encryption system
Again: are you reading and thinking? Or are you just writing?
You're simultaneously literally claiming that two secure ciphers can be combined to result in an insecure cipher when their keys are generated independently. This is far more astonishing than the claim that the ciphers you're using are actually secure in the first place. You're already accepting the latter without any sort of proof, yet you're bothered by the former? Hell, you haven't even shown this is possible for any pair of secure ciphers; your "example" was missing the most crucial part of the cipher -- the key. The whole argument is so crazy it's just utterly ridiculous.
> There are at least 3 obvious approaches: (1) applying the functions sequentially, (2) concatenating their outputs, (3) XORing their outputs. None of these takes rocket science to figure out, and some 5 seconds of thinking would easily rule out #1 and #2 as inferior to #3.
This is wrong. Concatenation would be harder to attack than XOR. Finding two things which hash to two particular values in two separate hash functions is necessarily harder than finding two things which hash to values which XOR to the same value--almost a priori. You replace a double collision (across two hashes), which is very unlikely, with an XOR collision, which is going to be exponentially easier.
> You're simultaneously literally claiming that two secure ciphers can be combined to result in an insecure cipher when their keys are generated independently. This is far more astonishing than the claim that the ciphers you're using are actually secure in the first place.
Since you seem to want practical examples on recent crypto: consider meet in the middle attacks on 2DES as an example of why combined cryptosystems are not necessarily as strong as you'd imagine. It's admittedly a weak example--still stronger than 1DES, and an old system. Fundamentally, combining cryptosystems, even with separate keys, gives you a new cryptosystem which requires separate analysis.
> Hell, you haven't even shown this is possible for any pair of secure ciphers; your "example" was missing the most crucial part of the cipher -- the key. The whole argument is so crazy it's just utterly ridiculous.
If I had a good attack on RSA + ECC, I'd be writing a paper about it. I'm gonna posit that if that's the kind of proof you want to believe you're "wrong", you'll remain happily "correct" in this scenario.
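The 2DES meet-in-the-middle point mentioned above can be demonstrated on a toy cascade. The 16-bit "cipher" with 8-bit keys below is invented purely for illustration: recovering both keys takes on the order of 2*2^8 trial encryptions plus a lookup table, not the 2^16 that naive double encryption might suggest.

```python
# Meet-in-the-middle on a toy double encryption: 16-bit block, two 8-bit
# keys. Naive brute force tries 2^16 key pairs; meeting in the middle
# needs only ~2*2^8 cipher operations plus a lookup table.
BLOCK = 65536  # 16-bit toy block

def enc(k, x):  # invented invertible toy cipher -- NOT a real primitive
    return ((x + 1000 * k) % BLOCK) ^ (257 * k)

def dec(k, y):
    return ((y ^ (257 * k)) - 1000 * k) % BLOCK

k1, k2, plain = 42, 199, 12345
cipher = enc(k2, enc(k1, plain))

# Tabulate E(k, plain) for every inner-key guess...
forward = {}
for k in range(256):
    forward.setdefault(enc(k, plain), []).append(k)

# ...then meet it from the other side with D(k, cipher).
candidates = [(a, k) for k in range(256)
              for a in forward.get(dec(k, cipher), [])]

# The true pair is among a handful of candidates; a second known
# plaintext/ciphertext pair would narrow it down to one.
assert (k1, k2) in candidates
```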
> This is wrong. Concatenation would be harder to attack than XOR. Finding two things which hash to two particular values in two separate hash functions is necessarily harder than finding two things which hash to values which XOR to the same value
No, you're the one who's wrong. You're assuming the hash is secure and then trying to brute-force it. But the entire discussion is not about brute force; it's about when an adversary breaks the hash, i.e. one who is able to produce multiple inputs with the same hash much more quickly than with brute force. This means they can attack the hashes independently, whereas if you XOR, they can't do that. Heck, if you XOR, the probability that they'll be able to tell which algorithms you used already becomes astronomically low, let alone them breaking it.
> Since you seem to want practical examples on recent crypto: consider meet in the middle attacks on 2DES as an example of why combined cryptosystems are not necessarily as strong as you'd imagine.
Again, you're wrong. You said you were "making it obvious how stacking ciphers can WEAKEN an encryption system". All you proved is that it isn't twice as strong. I never claimed nor even imagined that it was twice as strong. I merely claimed that the probability of it being WEAKER is astonishingly lower than the probability of the crypto layers being strong in the first place. Why do you keep changing your arguments?
>> Hell, you haven't even shown this is possible for any pair of secure ciphers; your "example" was missing the most crucial part of the cipher -- the key. The whole argument is so crazy it's just utterly ridiculous.
> If I had a good attack on RSA + ECC, I'd be writing a paper about it. I'm gonna posit that if that's the kind of proof you want to believe you're "wrong", you'll remain happily "correct" in this scenario.
Way to keep changing the topic just to win the argument. I just pointed out that your counterexample ciphers didn't even have independent keys, for God's sake!! Instead of accepting that you made a silly mistake, you're spreading meaningless FUD. Why can't you just accept you made an error instead of giving me this nonsense? Are you just a troll? If you keep trolling, don't expect me to respond.
If you have a combined hash XYZ = weak_hash(m) XOR strong_hash(m), then you still have a birthday collision attack available. Just keep permuting the message in various ways, and in 2^(hash length / 2) operations you have two identical hashes with different inputs.
Edit: yes, this isn't new, but it is strictly weaker than appending the two hashes. Appending increases the difficulty, as you have two hard targets you must hit with the same input, vs. one target, which may even be weakened.
You're also assuming the hashes are fully uncorrelated. If the designs are similar, there could be a correlation between the two which biases the output, such that some bits often will be the same or different in certain ways. This can drastically reduce the number of possible outputs in known ways, and could even enable cryptanalysis to break it faster than bruteforce if part of the weaker hash counteracts parts of the other.
You also forgot timing attacks and other sidechannels in layered encryption.
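The birthday math above can be demonstrated on a deliberately truncated 24-bit hash; the truncation is the toy assumption, and with a full 256-bit output the same attack needs on the order of 2^128 work.

```python
# Birthday collision on a 24-bit truncated hash: a collision shows up
# after roughly 2^12 attempts (about the square root of the 2^24 output
# space), not after 2^24.
import hashlib

def h24(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()[:3]  # 24-bit toy hash

seen = {}
i = 0
while True:
    m = str(i).encode()
    d = h24(m)
    if d in seen:
        break  # seen[d] and m are distinct messages with the same hash
    seen[d] = m
    i += 1
```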
> If you have a combined hash that is XYZ = weak_hash(m) XOR strong_hash(m), then you still have a birthday collision attack available. Just keep permutating the message in various ways, and in 2^(hash length / 2) operations you have two identical hashes with different inputs.
You understand what "good enough" means, right? Brute force is irrelevant if your hashes are long enough (e.g. say, 256 bits). Again, like the other guy, you're confused: we're trying to guard against broken hashes, not brute force attacks. Brute force attacks are trivial to guard against just by changing the length.
> Edit: yes, this isn't new, but it is strictly weaker than to append the two hashes. It increases the difficulty as you have two hard targets you must hit with the same input, vs one target, which may even be weakened.
I thought I already explained this, but I'll explain it again.
For a brute force attack, sure, it's weaker. But like I said, that's already trivial to guard against, so it's irrelevant.
For a broken hash, it's MUCH STRONGER, since in the concatenation case, both hashes can leak information about the input (meaning breaking one can help break the other), whereas in this case it's the opposite: the adversary won't get any information about the input unless he breaks both hashes simultaneously.
You're thinking about the problem wrong.
> You also forgot timing attacks and other sidechannels in layered encryption.
Somebody already mentioned this in another comment chain. I initially thought of it but then forgot it, yeah. But it's just something to watch out for, not an argument for not doing it in the first place. It's also trivial to guard against if you apply a standard layer first (which doesn't preclude applying another standard one last, and putting your own layer in between).
Common sense is not so common. I would certainly find it very useful if the cryptographic community developed best practices for how to stack algorithms.
> Common sense is not so common. I would certainly find it very useful if the cryptographic community developed best practices for how to stack algorithms.
Here's the list of best practices for encryption:
1. Have at least one well-established & accepted cipher (e.g. AES).
2. Generate the keys for all the layers independently.
> There are more points to consider than that. For instance, which order should you apply the ciphers in?
No. The order shouldn't make any difference unless for some reason you're sending extra data in cleartext that is encrypted with one cipher but not the other. This is because the output of the standard cipher (e.g. AES) would look random, so that implies the final output must look random, and hence they won't be able to tell there's another layer on top just based on the order of the ciphers. That is, unless they've already broken the other standard cipher (in which case now you're only dealing with the custom layer regardless). If the final output isn't random, it means you're partially reversing the standard crypto, which, as I said above, cannot happen unless you've broken the crypto or avoided using independent keys.
Edit: I suppose the theoretically optimal thing to do may be to apply the standard cipher last, to absolutely, positively ensure that the adversary is forced to break that before they even know you have another layer underneath (to avoid parallelizability of breaking both). I can't imagine this ever being worse. But at this point we're talking about theoretical optimality; from a practical standpoint I don't see this mattering. But at the same time since I don't have an argument for doing it the other way, you might as well always do it this way.
I would suggest the standard cipher first. Because, if the home brewed one is used first, it may leak information over side channels.
Another concern is that if the home made cipher creates a cipher text with differing lengths depending on the content of the plain text, the standard cipher will not be able to obscure that length.
> I would suggest the standard cipher first. Because, if the home brewed one is used first, it may leak information over side channels.
Ahh! I remember realizing this once but then I completely forgot about it. It's a good point, thanks for mentioning it. The thing to note here though is that the only side-channel attack here is the time taken for the encryption to occur, since we're talking about networks (and not physical penetration of the system's environment)... which is admittedly nontrivial to defend against with modern CPUs, but which is not quite as hard to do as it might seem, if by side channel people think of the same thing I normally do (e.g. E/M waves from the monitor or something).
So maybe apply a standard layer initially, add your custom layers, then top it off with another standard layer?
> Another concern is that if the home made cipher creates a cipher text with differing lengths depending on the content of the plain text
I guess I assumed it was obvious you would never do this because it's common sense if you know even basic cryptography, and as far as I know, this is literally the only possible failure mode with regards to information leakage in the ciphertext itself, so it's not like you have to worry about other similar situations either. (But do correct me if I'm wrong.)
The odds that your homegrown function is an inverse to the dank-dispensary-function you've layered below it are as low as somebody cracking dank-func without insider knowledge. Dank-func would be worthless in all situations if it were the case.
Unless state-actor is willing to spend time decrypting homegrown-func, you have succeeded.
I could easily imagine @Taek (working for the NSA) reading your comment and shitting his pants, realising you just touched on the holy grail, and wanting to put you off the idea of putting your own custom crypto on top of the industry standard. That way they would have a much harder job than right now, where everybody uses the same crypto, which they are already specialists at breaking. (Which we don't know now, but in 30 years they will acknowledge it, and acknowledge that in 2016 they had people roam the Internet forums and school developers about how bad homebrewn crypto is, because everyone avoiding homebrewn crypto makes their job a lot easier.)
That's a joke in regards to @Taek, but it could very well be. It must be a lot easier for NSA and all other intelligence agencies when everybody uses the same crypto tech.
> I could easily imagine @Taek (working for NSA) reading your comment and shitting his pants, realising you just touched on the holy grail and wanting to put you off the idea of you putting your own custom crypto on top of industry standard because that way they will have a much harder job than right now where everybody uses the same crypto which they are already specialist at breaking (which we don't know now, but in 30 years they will acknowledge it, and acknowledge that they in 2016 had people roam the Internet forums and school developers about how bad homebrewn crypto is because not using homebrewn crypto makes their job a lot easier).
+1 it's funny, I almost feel the same thing myself. I literally have been wondering why otherwise sane people argue against something so blatantly obvious EVERYWHERE I look online. EVERYBODY says don't roll your own crypto and downvotes you to hell if you suggest you're going to do it, yet it's quite obvious that multiple layers of encryption are better than a single one. Sometimes I almost feel like everyone works for the NSA except me or something.
It depends on if only one of the layers leaks information or not. If the custom layer leaks and the other doesn't, the custom layer is making things worse.
When you take the argument that a big problem with roll-your-own crypto is the tendency for the implementation to be naive about the many ways information can leak, well, there you go: gluing a competent implementation to an incompetent one compromises the competent implementation.
What we need is a construction that is as secure as the strongest underlying primitive. For instance, for symmetric encryption: let M be the secret message, C1 the first cipher, C2 the second cipher, K1 and K2 two randomly generated, independent keys. Oh, and a nonce, but let's ignore that for now.
I think it is easy to prove that C1(K1, C2(K2, M)) is at least as hard to break as either C1(K1, M) or C2(K2, M). Because if one of the ciphers is easy to crack, the other can still hold.
Hashes are different, because they're not reversible. In this case, a bad hash could indeed project the input space into a smaller output space than expected, and previous or subsequent hashes cannot reverse this mistake.
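The construction C1(K1, C2(K2, M)) can be sketched with two toy keystream ciphers and independently generated keys. Both "ciphers" here are illustrative HMAC-based stand-ins, not vetted primitives; the tag argument only serves to make C1 and C2 distinct functions.

```python
# Cascade C1(K1, C2(K2, M)) with independently generated keys. Both
# "ciphers" are toy HMAC-keystream XOR constructions (illustrative
# stand-ins only, not vetted primitives).
import hashlib, hmac, os

def make_cipher(tag: bytes):
    def cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
        ks = b""
        ctr = 0
        while len(ks) < len(data):
            ks += hmac.new(key, tag + nonce + ctr.to_bytes(8, "big"),
                           hashlib.sha256).digest()
            ctr += 1
        return bytes(a ^ b for a, b in zip(data, ks))
    return cipher

C1, C2 = make_cipher(b"C1"), make_cipher(b"C2")
K1, K2 = os.urandom(32), os.urandom(32)  # independently generated keys
nonce = os.urandom(16)

M = b"the secret message"
ct = C1(K1, nonce, C2(K2, nonce, M))          # C1(K1, C2(K2, M))
assert C2(K2, nonce, C1(K1, nonce, ct)) == M  # peel the layers back off
```

The independent keys are the load-bearing assumption: reuse or relate K1 and K2 and the "at least as strong as either layer" argument from the comment above no longer applies.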
I did specify that it's a tradeoff. If you are more worried about someone having a cache of exploits on all the popular crypto than you are worried about someone taking the time to break your home crypto, then perhaps you should roll your own.
But understand that the amount of effort required to break home-grown crypto is usually orders of magnitude lower than the effort required to break standard crypto.
By the same token, the amount of effort required to break not one but hundreds of thousands, even millions, of different home-grown cryptosystems becomes orders of magnitude higher.
No, you'd just develop automated processes for identifying the most common mistakes. Home-growers would still likely all be using the same building blocks. And making the same classes of mistakes.
The dragnet probably wouldn't be nearly as effective, but the NSA would be far more likely to be able to compromise all priority targets.
What if the "stacking" is simply using multiple hash functions in parallel, i.e. not storing one hash but a bunch of hashes all done with different hash functions? Would that not make finding a collision that works in all hash functions nigh impossible, with the trade-off that you're potentially giving away more information about your input data?
I think the fact is that in most cases, you can't do much about government actors, especially not the US. All you can do is use the best crypto that's available and hope that the government hasn't cracked it yet (but you should always expect that the message will be as good as plaintext in 30 years).
If you roll your own crypto, you're liable to have the guy down the street spot an issue in your custom implementation and walk out with all the goods before you realize what happened.
Maybe the most technologically advanced spy agency in the world can decrypt some industry-standard encryption if your messages are interesting enough and it justifies the time-cost (police, prosecutors, and laypeople still can't), but probably someone on HN can pick out errors with your custom cryptography and render it useless, at least until you've really spent a great deal of time testing and refining it. That's the difference, IMO.
> If you roll your own crypto, you're liable to have the guy down the street spot an issue in your custom implementation and walk out with all the goods before you realize what happened.
Isn't it quite obvious that you can have multiple layers of crypto? Like your own on top of a standard one? Why do you (and a lot of people, not just you) set up a silly strawman argument whenever talk of rolling your own crypto comes up in any online forum?
Unless you are just encrypting stored information ('storing secrets'), I don't see what this gets you in the long run as an approach for 'communicating secrets'.
- at least one comment in this thread points out the fact that the very machines you use are likely compromised at multiple levels.
- you are communicating over a custom stack, e.g. your own chat system. Either this stack is a secret or it's published. If the former, see above. If the latter, the custom mix algo is also public.
I think the approach you advocate could work against non-state actors (Surveillance Inc., PIs, etc.), but if your local state actor with legal ability to stick a bag over your head is interested, it is useless.
In the context of the OP and RT decryption by the NSA, your signal would raise flags given that known methods fail to decrypt it. So you'll get flagged, and "your" machine will likely get to chat with its government peers via a factory-installed backdoor and spill the beans.
Given all this, the consensus advice of "don't roll your own" possibly boils down to practical advice: the standard is vetted and is effective against non-state actors, and your innovations may simply reduce the standard protection, but they will not gain you anything (since the non-state actor is already frustrated by standard crypto anyway.)
If your game is NSA level actors, you really shouldn't be storing and communicating with computers.
> your innovations may simply reduce the standard protection
I've spent a ton of time debunking this myth. Read my other comments; this can only happen if you don't use independent keys or if you somehow break the encryption. I'm not going to waste more time explaining it.
Because many people would say "Why bother?", from both ends of the argument. Someone who is naively rolling their own crypto would say "Why should I waste time double-crypting everything and implementing wrappers when I'm investing all this time in my own super-cool crypto which no one can crack because I'm so awesome?" and someone else who is recommended against it would say "Why bother spending all that time duplicating work that's already been done for you by experts when your custom layer is just going to get compromised anyway?"
So in short, for practical purposes, it's just not efficient, in either programming time or run time. If you want to double-wrap, as it were, more power to you, but be aware of the costs.
If you are going to double-wrap, it'd be best to wrap the payload in your custom package first and then standard second on the outside, so someone has to overcome the first barrier before they can find anything out about the custom/second barrier. That's a deterrent that would probably require man-time and sufficient interest if indeed the NSA can automatically decrypt current standard crypto.
I also think there's a little bit of confusion here. When people say "Don't implement custom crypto", they usually mean "Don't implement your own crypto library to implement common standards and algorithms". Most people don't think they can create crypto that rivals the publicly-used cryptographic algorithms out there, so the issue is usually about whether that person should implement AES directly or through GnuTLS, etc. If the NSA has holes in AES and you implement it correctly, you haven't really accomplished anything. This is probably what you've been talking about, but I want to make sure that point is clear for any other readers.
And just for a counterpoint, I remember an exchange on here between tptacek and cperciva many years ago where Thomas congratulated Colin on being one of the few to actually correctly implement industrial-grade crypto standards without using one of the major libraries. If either of them is reading and I'm misremembering, feel free to correct me, but the issue is less about stifling things or leaving backdoors for the powerful than it is about preventing the kinds of subtle but critically damaging bugs that run rampant through basically any code that hasn't been extensively battle-tested and matured in the crucible for years.
Since encountering cryptography in the wild is a signal that you may be stumbling upon something high-value, it's definitely not the kind of thing you normally want to take chances with. That risk profile makes a tried-and-true-but-possibly-compromised-by-the-most-technically-advanced-people-on-earth implementation better than an I-just-rolled-this-at-home-so-I-know-the-NSA-hasn't-hidden-any-backdoors-in-it-but-probably-someone-will-crack-it-after-three-days implementation.
>If you are going to double-wrap, it'd be best to wrap the payload in your custom package first and then standard second on the outside, so someone has to overcome the first barrier before they can find anything out about the custom/second barrier.
Quite a typo here. Replying to clarify as edit timeout has already expired. I meant you should use the standard wrapper on the outside layer so it appears to be ordinary until someone breaks down that first layer.
You really believe the entire cryptographic community is so thoroughly morally compromised? The same community that has been investigating and fighting the NSA's wrongdoings?
Crypto is hard; coming up with sound cryptographic primitives is highly nontrivial if you do not have the right training. Even if you have the mathematics to back it up, safely implementing the mathematics is in itself tricky, with all kinds of side channel attacks potentially breaking your scheme.
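One concrete example of such an implementation trap: verifying a MAC with ordinary `==` can leak, through timing, how far the comparison got before the first mismatching byte, letting an attacker forge a tag byte by byte. Python's stdlib ships a constant-time comparison for exactly this reason. A minimal sketch (the key and message are made up for illustration):

```python
import hashlib
import hmac

key = b"server-side-secret"
message = b"amount=100&to=alice"

# Compute an HMAC tag with the stdlib rather than hand-rolling it
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(msg: bytes, candidate_tag: bytes) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    # hmac.compare_digest takes time independent of where the first
    # mismatching byte is; `expected == candidate_tag` does not.
    return hmac.compare_digest(expected, candidate_tag)

assert verify(message, tag)
assert not verify(message, b"\x00" * 32)
```

The mathematics of HMAC is sound either way; the difference between the two comparison operators is purely an implementation detail, which is the parent's point.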
It's not necessarily that the whole community is compromised, it's that the goals of an organization devoted to signals intelligence align here with the guidance of the community, fostering what converges towards a monoculture.
There's some serious confusion here, I think. Maybe it's local [;)] but here goes ...
> Furthermore if I was someone holding solutions to cherry-picked primes for well-understood algorithms in wide use ...
Well, the article notes:
> For the nerds in the audience, here’s what’s wrong: If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number with a particular form. There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes. But there was a very important detail that got lost in translation between the mathematicians and the practitioners: an adversary can perform a single enormous computation to “crack” a particular prime, then easily break any individual connection that uses that prime.
That prime is the "dhparam.pem". It's used in bootstrapping the encrypted connection. And it's my understanding that this is entirely distinct from the issue of "cherry-picked primes for well-understood algorithms in wide use". This vulnerability is simply using "standardized or hard-coded primes". Cipher vulnerabilities are about ...
And that's where my understanding ends. Maybe someone can complete the argument. Or sort me out ;)
Perhaps the solution is an audited framework for rolling your own crypto. This way you just need to write one small module; everything else is handled in secure, professional ways: key exchange, memory wiping, etc.
...the conventional encryption techniques are also potentially widely exploitable by state-level actors.
That really depends on how you define "conventional encryption techniques." As is so often the case, this proposed line of attack isn't against the crypto proper, but against habitually poor implementation details. Human laziness, really. They're not picking the locks, they've found a way to steal the keys.
I wonder if a good way to go would be to implement standard encryption (AES in most cases), and then take that encrypted payload and encrypt it with your custom code that you designed.
So even if it's true that you shouldn't roll your own encryption in practice, and they do break yours, they'll still have to get through the AES as well. So it's a no-risk addition.
It's never a no-risk addition. Perhaps your custom code is buggy and ends up leaking random memory contents, which happened to contain both your AES key and your custom crypto key, into the message payload, à la Heartbleed.
You're making the argument "If I apply good crypto plus crypto which might or might not be good, the worst case scenario is that I end up with crypto which is at least as good as the good crypto."
You're a perfect example of Dunning-Kruger: your crypto skills are so poor that you lack the knowledge to evaluate them. You're exactly the kind of person that advice is aimed at.
This comment boils down to: you're wrong and you're stupid.
Also, quite ironically, you have misinterpreted and misapplied the D-K effect. Care to explain why the above wouldn't be true? Does an extra layer of bad crypto diminish the effectiveness of the good crypto? It doesn't seem like it would, and the OP's statement reads more like a reasonable logical assumption, but I know very little about crypto, which is why I ask.
Possibly. If your rot13 takes longer to rotate a Z than it does an A, then it will leak your Zs through the AES. Similarly, you might accidentally write your own rot13 Heartbleed.
> You're making the argument "If I apply good crypto plus crypto which might or might not be good, the worst case scenario is that I end up with crypto which is at least as good as the good crypto."
I'm really not seeing how you pulled that argument out of the GP's comment.
I think he means that we are encouraged by propaganda to assume that all crypto is broken, so that we do not even attempt to use it, as that would just take more time to implement with no additional security for us.
"Why should I use Signal if it is no more safe than WhatsApp, just less convenient?"
I sat in a two day class this week put on by people who claimed to be former intelligence agency types. They told a lot of entertaining stories about how far they'll go to both protect their own secrets and gain everyone else's.
Some of their advice seemed useful, but some of their suggestions were absurd and their conclusions naive. The most interesting thing was that challenging them on their assertions (anti-virus is only n% effective) didn't produce any facts to back the assertions up. Instead they'd just mock the challenger and say things like "no matter what you do we can crack your systems in n minutes, regardless".
In fact one of their suggestions was to simply fire anyone who didn't have like-minded world views. People who didn't just nod along with whatever they had to say were the problem and needed to go. If that's how the intelligence community works I can see why they are in an echo chamber that justifies their any means necessary approach to intelligence gathering.
People complain about Silicon Valley's obsession with culture fit, infantilizing its employees, and supplanting an actual adult life with 'perks', but truth be told only the last one is an SV invention. (Government-standard raises and hours are probably the only good thing about such jobs.)
Fun story: when logjam broke, the company I was working for at the time immediately changed their SSL configs to require 2048-bit...then had to change it back the next day, because we had clients with ancient java versions that could only handle up to 1024. So we made do (and as far as I know, this company still makes do) with 1024 and a randomly generated dh_param.
I had soooo much fun with that when I found out that one of my clients was connecting to a Logjam-vulnerable server and the OpenSSL-based code connecting to it suddenly failed without explanation after a security patch.
I found it by accident because I was able to reproduce with openssl s_server, which happened to be presenting a weak DH key by default as well.
Just in case the above is not clear enough, newer versions of OpenSSL refuse to connect to anything with a weak DH key and nothing was showing up in our logs, making it extra confusing.
I can't remember where I read a longer explanation of this, but a quick Google search got me to this blog entry, http://www.scottaaronson.com/blog/?p=2059 , where the topic comes up. I think that for the most part what we civilians know as cryptography is often a red herring, a trick, or woefully out of date from the point of view of what the professional state actors know how to do; you get that impression from declassified documents, anyway. Crypto works well enough for commercial and civilian purposes but has no impact on militarized hacking, I think. Please correct me if I am wrong, but I believe this is all by design, even going so far as the NSA putting backdoors in CPUs and suchlike. If they send a company a national security letter that insists on compliance, the company not only has to comply but also can't legally disclose to the public that it complied, so we can't really know the truth here.
Admittedly commenting without having jumped into the content here, but from what you're describing, it sounds less about how state actors are "breaking so much crypto" and more about how state actors are "breaking so many cryptographic implementations."
This might also explain the possible shift away from compromising algorithms (with perhaps the notable exception of Dual_EC_DRBG): the IC knows well enough that everyone's going to do just enough things wrong in _implementing them_ that it's perhaps more beneficial to promote unweakened ciphers in the grand scheme. Heck they could even promote sound implementation practices and have the confidence to know any one target will gum something up just enough to give them a way in.
But who knows. Odds are still decent that I'm wrong with the compromised algorithms assertion.
Not only that: remember the Diebold debacle around the GWB election? He was one of the people who hacked that machine, and he researched other voting machines as well. He's running a course on Coursera called Securing Digital Democracy, https://www.coursera.org/learn/digital-democracy , which I can highly recommend.
NSA is just the US Foreign Intelligence and Government protection agency [1][2]. They do nothing, and are expected to do nothing, to protect domestic civilian assets.
NIST [3] is possibly the closest agency I'm aware of that exists to disseminate knowledge and standards to civilian organizations, but they're more interested (per their website, anyway) in simply advancing US business to be competitive in the global marketplace. That, and we've seen that they will bend to the NSA over such critical things as which elliptic curve ought to be recommended for use.
If I go to the NSA website it says "How We Protect the Nation", so I infer that they do some work to protect something. Yet the US government was hacked: "5.6 million fingerprints stolen in U.S. personnel data hack" https://www.theguardian.com/technology/2015/sep/23/us-govern...
The Democratic and Republican party computers were recently hacked; you will say those are not strictly government systems, since they belong to political parties.
There are many more large-scale US government hacks and leaks I don't need to list.
So I gather from your answer that they are tasked with protecting US government assets, but it seems they don't do a good job.
Well, the NSA does military signals intelligence. They protect US interests by compromising foreign assets, and by intercepting communications. In recent years, there have been efforts to expand its defensive role. But it's rather like the career transformation from "black-hat hacker" to security professional ;)
Nothing is 100% effective, and there will always be a long tail and diminishing returns on any economic organization, such as the NSA.
Additionally, there are simply too many unknowns to say that they're not doing a good job, and I think that's the nature of intelligence work. Lots of secrecy makes oversight and external efficacy judgments difficult to make; there's just not enough data.
It's like trying to say that a chess player is doing poorly when you don't know the size of the board, the number or types of pieces on either side, the positions, or really anything but an incomplete glimpse at the captured pieces of both sides. And they don't want you to see those pieces either, so even that is probably incomplete.
If they were doing their job, would you expect to hear about it? See the comments in this thread about Enigma. It's an interesting thought exercise; how do you tell the difference between peace which is organic and peace created through really effective intelligence?
I'm not arguing that the NSA is ineffective. I'm just saying that I haven't heard of much on the purely defensive side. Or at least, that's what I get from Bamford and the Snowden stuff. Maybe their defense is just unreported.
You assume incorrectly and should read the Information Assurance section on their page:
"NSA to secure National Security Systems, which includes systems that handle classified information or are otherwise critical to military or intelligence activities."
That does not include civilian networks or non-DoD government networks or any system that isn't designated a National Security System. It definitely isn't "protecting USA government assets" in general.
Also... Snowden and Manning and Assange, and what those leaks imply about the "smartness" and eliteness of the NSA. People complain about Hillary having a private (well, non-government-managed) email server, yet look at all the leaks/hacks from government-owned/controlled/managed systems, and corporate ones.
My (yes, admittedly anecdotal and personal) perception is that Google/Gmail is more secure/smart/elite than the NSA. Based on public evidence to date.
So basically your perception is that an insider with access (Snowden, Manning) can leak more info than hackers? That's the most obvious and difficult security threat anywhere, something even Google/Gmail is susceptible to.
And yet... the supposed computer geniuses of the NSA allowed it to happen, whereas Google/Gmail has not, to date (that we're aware of, of course). That was part of my point.
Keep in mind they have to defend against multiple state actors (Russia, China, even some much smaller countries with a lot of investment in SIGINT) who are also very competent and have a ton of attack vectors to work with.
Defense will never be perfect, but that is no excuse for not having a dedicated organization doing training and audits and enacting and enforcing regulations strictly. While all of this may happen in bits and pieces, there is no single organization doing it and measuring its success by the number of attacks it prevented or failed to prevent.
NSA on the other hand wants backdoors and weakened cryptography tools...
So do the adversaries we worry most about use the same prime numbers as everybody else, or do they come up with their own? It seems the most likely outcome is that the NSA is able to spy on Americans much more easily than they can spy on anybody else.
The question isn't about what's possible, the question is about common practice. If everybody used their own private prime numbers this wouldn't have been news.
You might be interested in googling Suite B. It lays out guidelines for encrypting government data up to the Top Secret level. If you dig a little bit, you can find guidelines for auditing crypto implementations and also for handling/generating key material.
Briefly, Suite B is AES-GCM, ECDH, ECDSA, and SHA2.
He also blinded the public key behind two different hash algorithms[1] for display as an address (RIPEMD-160 & SHA-256). The public key of an address is only revealed in the output script when spending funds held by that address. Thus, with proper wallet behavior surrounding address reuse, your EC public key is vulnerable only for a short period.
Now, the private key must be secured properly, but that's the case with any crypto system.
A future mathematical weakness in RSA/DSA could have been a concern, but two larger reasons to use EC in BTC are that a) signature computation is much faster, and b) almost every 256-bit value is a valid key, allowing fast key generation that supports the (often ignored) recommendation to use a new key for each transaction.
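Point (b) is easy to see in code. Using secp256k1's published group order n, key generation is one random draw plus a range check (rejection is astronomically unlikely), versus the prime search RSA key generation requires. A sketch:

```python
import secrets

# Group order n of secp256k1, a published curve constant
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def new_private_key() -> int:
    """Any integer in [1, n-1] is a valid secp256k1 private key,
    so generation is a single random draw plus a range check."""
    while True:
        k = secrets.randbits(256)
        if 1 <= k < N:
            return k  # rejection happens with probability ~2^-128

key = new_private_key()
assert 1 <= key < N
```

Since n is so close to 2^256, the loop almost never iterates twice, which is why generating a fresh key per transaction is cheap enough to be practical.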
If anyone is gonna break a lot of crypto, the NSA is. Who else has spent as much money, equipment and manpower on it? Who else has as many people working on it as they do?
Crypto-breaking hardware is very different from your average server farm. The best crypto-breaking algorithms are parallel, and you need to spend a few hundred million dollars on a suitable ASIC setup to brute force 128 bits symmetric crypto. https://cr.yp.to/snuffle/bruteforce-20050425.pdf
While the likes of Google, Amazon, and Facebook may have that much hardware, they most certainly don't have that much crypto-breaking hardware.
No, not GPUs. GPUs are good at floating point vector math, not the bit shuffling operations that dominate the runtime of symmetric encryption or hashing. Custom ASICs specialized for crypto computations get orders of magnitude better power efficiency than GPUs at cracking.
For instance, in bitcoin mining (dominated by the SHA-256 calculation), the best GPUs get 0.013 MH/J while current ASICs get 10182 MH/J [0], so almost a million times more work per unit of energy.
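Sanity-checking the "almost a million times" claim from the quoted figures (taking the MH/J numbers above at face value):

```python
gpu_mh_per_joule = 0.013     # figure quoted above for the best GPUs
asic_mh_per_joule = 10_182   # figure quoted above for current ASICs

# Energy-efficiency advantage of the ASICs over the GPUs
ratio = asic_mh_per_joule / gpu_mh_per_joule
print(f"ASICs do roughly {ratio:,.0f}x more SHA-256 work per joule")
```

The ratio comes out around 780,000x, consistent with "almost a million times more work per unit of energy".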
Google, Facebook, and Amazon machines are busy making money for them so they can pay their employees' salaries. Unlike the chums at the NSA, they can't take money from your pocket in the name of a fair share.
Yeah, pretty sure Google, Facebook, and Amazon servers are busy, you know, running Google, Facebook, and Amazon. Businesses can't afford to have resources like the NSA's sitting around idle. The government throws out more hardware in a day than most businesses buy in a year: literally thousands of perfectly good i7s/i5s being destroyed because they don't have up-to-date TPM / secure boot support.
>The government throws out more hardware in a day than most businesses buy in a year: literally thousands of perfectly good i7s/i5s being destroyed because they don't have up-to-date TPM / secure boot support
With a handful of exceptions you're mostly wrong. It's hard for the government to throw things away, or even sell them to a third party at scale. There are proper channels for getting rid of anything that isn't literally trash, and government being government, the people who run those channels really don't like when they're not used. Especially in defense. Google "DMRO".
I work for SPAWAR. Every single machine without TPM and secure boot is being destroyed. I wasn't even allowed to take the spare extra RAM, as it all goes to the shredder.
Each of those companies needs enough hardware to handle their highest expected loads. But most of the time, they aren't near their peak needs, leaving a lot of hosts sitting idle.
AWS and other cloud providers face the same situation, but to another order of magnitude.
These leftovers would be enough to match what the NSA has.
> These leftovers would be enough to match what the NSA has.
They still need power to run these, and space to store them. All of that costs money. Why would a private org break crypto if it has nothing to gain from it? And besides, people willingly give their data to Google, Facebook, and Amazon. Why would these orgs need to crack anything at all?
You're missing the point: the hosts are already there, powered but sitting idle. They are the reserve computing capacity. They must be there for when load increases.
This is why AWS exists, because Amazon needed to have 4X capacity on cyber Monday, but not the rest of the year.
I'm not saying there is any benefit to google cracking VPNs, but simply that they could, that they are an organization with enough hardware to do it.
I think you are forgetting that the standard run-of-the-mill Xeons and Teslas in a typical server farm are still far slower than custom hardware-accelerated circuits designed specifically for cracking a particular encryption system. There isn't really a justifiable commercial reason to need those at that scale at present.
They implied one of those has the machines, not the motivation or intended desire to do so. Learn to read before you start trying to call someone an idiot.
I saw somewhere that they were using custom built hardware modules to break AES keys, some kind of FPGA thing. It was cheap to build and extremely fast since it was specialized in doing just that.
We just have to look at the world of bitcoin mining to see that custom fabricated ASICs would wipe the floor with general purpose CPUs for this kind of task.
I've seen this hypothesis before. What I haven't seen is anything more concrete than "I heard about a guy who knows a guy who sold the pspice license that was used to..."
"When performing Diffie-Hellman Group Exchange, sshd(8)
first estimates the size of the modulus required to
produce enough Diffie-Hellman output to sufficiently
key the selected symmetric cipher. sshd(8) then randomly
selects a modulus from /etc/ssh/moduli that best meets
the size requirement."
The problem is
a) OS distributions ship pre-computed moduli in the /etc/ssh/moduli file. I.e. most users don't change these moduli. This facilitates pre-computation attacks.
b) These moduli are often too short (<2048 bit).
You can create your own moduli with ssh-keygen (see the "MODULI GENERATION" section in the ssh-keygen manpage).
FWIW: Here's my open bug for RHEL7 where I try to convince Red Hat to improve the situation (including more details and references):
Use a one-time pad (and distribute "securely") for anything you want to encrypt as best as possible. Consider everything else in the open already, or at some not too far off time in the future.
I'm thinking they grossly overestimate the difficulty of cracking a 1024-bit prime... based on my dabbling in that area, it's certainly possible for a network of machines to do it, and with modern GPUs and good number-theory implementations on them, I don't think that days-long, or even overnight, cracking is a stretch if you have a few spare machines lying around...
Are you possibly misinterpreting the terribly imprecise phrase "cracking a 1024-bit prime" as being the same as factoring a single number with two prime factors? I don't really understand what's involved with the actual process, but there was some discussion here earlier: https://news.ycombinator.com/item?id=10391925.
Alternatively, if you do correctly understand what's being done, it would be interesting to hear how you made your estimate. Based on other examples, I agree that it might not be impossible that the "usual approaches" could be sufficiently optimized on commodity hardware to get the 100x or 1000x gain you would need for this.
It's the reverse modular exponentiation problem (i.e., the discrete logarithm), right?
E.g. some a is transformed into b by x^a mod y = b, where y is a prime and x has special properties relative to it?
Each party does that once, they share the resulting b's, and then each raises the other person's b to the power of their own original input a to create a number that is kept secret (because a was kept secret). So if you can get a back from b (x and y are not secret), then you can work out the derived secret number, and it's not so secret anymore.
It's true that for large numbers this becomes harder, and there are many a's for a given b, but IIRC the properties of modular arithmetic make them identical in effect, such that finding any a is good enough to crack the key exchange.
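That description can be made concrete with a toy exchange, using a deliberately tiny prime so the "find any a with x^a mod y = b" brute force finishes instantly. (Real deployments use 1024+ bit primes precisely so this search is infeasible, which is the margin the article is about; the specific y and x below are made-up toy parameters.)

```python
import secrets

# Tiny public parameters: prime modulus y and public base x
y = 2039  # a prime; toy size, real groups are 1024+ bits
x = 7

# Each side keeps a secret exponent and publishes x^a mod y
a = secrets.randbelow(y - 2) + 1
b_pub = pow(x, a, y)

other_secret = secrets.randbelow(y - 2) + 1
other_pub = pow(x, other_secret, y)

# Both sides derive the same shared value
shared = pow(other_pub, a, y)
assert shared == pow(b_pub, other_secret, y)

# The attack: brute-force ANY exponent e with x^e mod y == b_pub ...
cracked_a = next(e for e in range(1, y) if pow(x, e, y) == b_pub)
# ... which suffices to recover the shared secret, as the parent notes:
assert pow(other_pub, cracked_a, y) == shared
```

Note the recovered exponent need not equal a itself; any e congruent to a modulo the order of x produces the same shared value, which is the "finding any a is good enough" point above.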
Hasn't this been essentially known for a long time? I'm not sure why that paper was even referenced. There's no breakthrough or new info here, people have speculated about this theoretical attack since forever.
FWIW, someone well-known and widely respected in the high tech community told me many years ago that he had consulted at the NSA, and they had computers that could crack 1024-bit RSA.
The quality of information in this thread is unbelievable; it makes me feel stupid. Are all these commenters crypto experts, or do some devs just have such in-depth knowledge of the subject? Heyzeus.