Basically, NIST seems to be reasoning that because Keccak's sponge construction has no internal resistance to meet-in-the-middle attacks, its collision and preimage resistances are the same.
An upper bound on the security of Keccak is set by the expression
c + r = 1600.
'c' is the capacity: the portion of the hash's internal bandwidth that is not directly controllable by an attacker.
'r' is the rate (bits per expensive f() function invocation) at which the message input can be processed.
Due to some basic and well-understood attacks, we know that Keccak cannot be more secure than c/2 bits.
NIST was planning to assign SHA-3-256 and SHA-3-512 security levels of 128 and 256 bits respectively. On one hand, this sounds like a nice, conservative overall security rating based on the (lower) collision resistance.
But NIST plans to re-tune these internal constants downwards, setting c to 256 or 512, whereas the competition final-round submission specified c = 512 and 1024 bits. The resulting speed boosts are roughly 24% and 89%.
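To make those numbers concrete, here's a back-of-the-envelope sketch (not from the original post; it just assumes the generic sponge bounds of min(n, c/2) bits for preimages and min(n/2, c/2) bits for collisions, with throughput proportional to r = 1600 - c):

    # output length n, with capacity c as submitted to the competition vs. as NIST proposes
    for n, c_submitted, c_nist in [(256, 512, 256), (512, 1024, 512)]:
        for label, c in (("submission", c_submitted), ("NIST draft", c_nist)):
            r = 1600 - c                     # rate: message bits absorbed per f() invocation
            preimage = min(n, c // 2)        # generic preimage bound, in bits
            collision = min(n // 2, c // 2)  # generic collision bound, in bits
            print(f"SHA-3-{n} {label:>10}: c={c:4} r={r:4} "
                  f"preimage={preimage} collision={collision}")
        print(f"  -> rate increase ~{(1600 - c_nist) / (1600 - c_submitted) - 1:.0%}")

Running it reproduces the 24% and 89% rate increases and shows the generic preimage bound for SHA-3-256 dropping from 256 to 128 bits.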
I don't think anyone should presume ill intent here on the part of NIST. But it sure doesn't seem like a good time to propose cutting security margins down to the limit of publicly-known attacks. The idea of a hash function which outputs 256 bits having only 128 bits of preimage resistance is unprecedented.
> the collision and preimage resistances are the same.
> The idea of a hash function which outputs 256 bits having only 128 bits of preimage resistance is unprecedented.
But does the SHA-3 construction mean a collision attack will further enable a preimage attack? Most breaks in recent cryptographic hashes have been confined to collision attacks. Even MD5 (publicly) hasn't been shown to be vulnerable to a viable preimage attack.
It's worth remembering that just to count to 2^128 it'll take a 3 GHz core with a single cycle increment instruction, along with a billion of its friends, over 3 and a half trillion years.
Also, even if a 128-bit meet-in-the-middle attack could be computed, it'd require an unfathomable amount of memory.
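For what it's worth, here's the arithmetic behind that "trillions of years" figure (a rough check, assuming one increment per clock cycle on each of a billion 3 GHz cores):

    SECONDS_PER_YEAR = 3600 * 24 * 365.24
    increments_per_second = 3e9 * 1e9            # 3 GHz times one billion cores
    years = 2**128 / increments_per_second / SECONDS_PER_YEAR
    print(f"{years:.2e} years")                  # ~3.59e+12, i.e. about 3.6 trillion years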
> But does the SHA-3 construction mean a collision attack will further enable a preimage attack?
Yes! This is a well-known feature of the sponge construction. Sponges generally handle this challenge by having a huge internal bandwidth, 1600 bits in the case of Keccak. No one is going to collide that!
However, this internal bandwidth effectively narrows at the points where the message input gets mixed in, which is why the choice of c (the minimum width in bits) is so security-critical.
But remember, c/2 just represents an upper bound! If algorithmic attacks against the 24 rounds of Keccak's f() function are discovered in the future, it seems very plausible that a little bit of safety margin in the choice of c could make the difference between an attack being exploitable or not.
It's deeply troublesome when these kinds of comments come up, for two reasons: first, and lesser, it's wrong; second, it's wrong in a way that isn't obvious. Processor speeds double approximately every generation, which we can estimate is once every two years.
What do we have to count to:
2^128 = 3.402823669209385e+38
How many times can we count in a year with a 3GHz core:
3e9 * 3600 * 24 * 365.24 = 9.4670208e+16
And how many times can we count in a year two years from now, when processing speeds have doubled?
9.4670208e+16 * 2 = 1.89340416e+17
How many years until a single core can count to 2^128 within one year?
log(3.4028e+38 / 9.467e+16) / log(2) * 2 = 143.21
So, in 143 years a single computer will be able to count to 2^128 in a single year. That's still a long time, but it's WAY WAY less than the trillions of years people quote. Add in as many extra cores/machines/datacenters/planets of processors as you like and you aren't really indefinitely secure. You are secure for a limited number of years, and that's it.
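Just to show the arithmetic end to end, here's the same calculation as a tiny script (assuming, as above, one count per cycle at 3 GHz today and a doubling of speed every two years):

    import math

    per_year_now = 3e9 * 3600 * 24 * 365.24      # counts per year at 3 GHz, one per cycle
    doublings_needed = math.log2(2**128 / per_year_now)
    print(f"{doublings_needed * 2:.1f} years")   # ~143.2, at one doubling every two years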
The Landauer limit [1] is relevant here. Even if you could theoretically make a processor that does it, it would take about 30 gigawatts of power for 1 year [2] just to do 2^128 bit flips (this is obviously a lower bound, since it disregards the additional energy required to actually do the calculation). That is more than 1/100th of the world's current electricity production. So it's not something that is going to be a concern for most people, and certainly not in the near future!
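A rough check of that 30-gigawatt figure (assuming the Landauer bound of kT ln 2 joules per bit operation at roughly room temperature, ~300 K):

    import math

    K_BOLTZMANN = 1.380649e-23                            # J/K
    joules = 2**128 * K_BOLTZMANN * 300 * math.log(2)     # Landauer bound for 2^128 bit flips
    watts = joules / (3600 * 24 * 365.24)                 # spread the work over one year
    print(f"~{watts / 1e9:.0f} GW sustained for a year")  # roughly 31 GW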
If the world had access to that kind of energy, I'm sure it'd be used for far more interesting things than finding a single hash collision.
Edit: and while we're on the subject of inappropriately extrapolating Moore's law, if performance-per-watt continues to double every 18 months, I'd be interested in how long it would take to even reach the Landauer limit. I can't seem to find online how much energy a single bit flip takes in a modern processor, so I can't do the calculation.
People just assume exponential growth goes on forever. You're giving a counterargument to Moore's law. Moore's law is not a physical law, and physical law says otherwise here.
Updating a counter 2^128 times on a non-reversible computer, at the highest efficiency permitted by understood physics, requires the energetic equivalent of something like 2 megatonnes of TNT.
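A figure in that ballpark falls out of the Landauer bound if you assume operation near the cosmic microwave background temperature (~2.7 K); that temperature is my guess at the assumption behind the number, not something stated here:

    import math

    K_BOLTZMANN = 1.380649e-23                           # J/K
    MEGATONNE_TNT = 4.184e15                             # joules per megatonne of TNT
    joules = 2**128 * K_BOLTZMANN * 2.7 * math.log(2)    # Landauer bound at ~2.7 K
    print(f"~{joules / MEGATONNE_TNT:.1f} megatonnes")   # roughly 2 megatonnes of TNT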
The additional state that brings security against brute-force preimage attacks also tends to increase security against analytical attacks, as well as speculative QC attacks. So it's not worthless at all, but your Moore's law argument is not credible.
> Processor speeds double approximately every generation which we can estimate is once every two years.
Do you have any sources to back up this claim? Maximum processor clock speed has stalled at ~4 GHz since 2005. If you're referring to Moore's law, it concerns the number of transistors in ICs, and it will eventually hit a limit (not within the next 10 years, but certainly before 140 years from now).
I guess I don't really understand the objection to NIST's reasoning. While a larger capacity (c) would increase pre-image resistance, it wouldn't help generic collision resistance (which would always be limited by the output size). There are certainly contexts where a pre-image attack would be more damaging than a collision, but collisions can be a problem in other situations. Asking the user to figure out which security level applies to their application seems like a source of potential issues. Giving them a generic security level parameter to work with might make things easier.
On top of that, Keccak's software performance is nothing to write home about and SHA-256 and SHA-512 look a lot safer now than they did when the SHA3 competition was started. The reasonably large performance boost the capacity changes could bring about might help SHA-3's case a bit for software developers who have barely started moving away from SHA-1 (if they're not still hanging on to MD5).
While 512 vs 256 bits of pre-image security for SHA-3-512 (which the author seems to think is an unacceptable security downgrade) is interesting to discuss in an academic sense, it seems ludicrous to suggest the change would actually benefit someone trying to break SHA-3. Any adversary able to perform 2^256 operations is so far beyond modern cryptanalysis that everyone might as well just give up.
Designing a crypto primitive that generates N bits of output with only N/2 bits of preimage security is unprecedented. It's a radical design decision.
The competition Keccak proposals specified a healthy safety margin; NIST proposes to eliminate it entirely. Why? Is it worth being able to hash messages 24-89% faster if in return you have to make your MACs twice as large? For example, if you currently use SHA-1 and want to move to SHA-3 without losing any preimage security, instead of 20 bytes you'll need to send 48 bytes over the wire with every message (with preimage resistance cut to n/2 bits, the smallest standard output size that still gives you SHA-1's 160 bits is SHA-3-384's 48 bytes).
Attacks only ever get better and c/2 is an upper bound. If weaknesses are found in other parts of the function, we may wish there had been at least some safety margin built in.
I agree that it's a new design decision, but I think there is reasonable support for making it. Hash function history has pretty well demonstrated that most implementers don't understand the difference between pre-image security, hash output length and collision resistance. And the security margin is far from eliminated entirely. Not only is there significant margin in the round function itself over the full number of rounds, but the 256-bit security level provided by the proposed SHA-3-512 seems fairly substantial to me.
I don't quite understand your MAC output length math. As far as I can tell, at least HMAC-SHA1 security could be achieved with SHA-3-512 truncated to 20 bytes of output. From what I understand of sponge functions, pre-image security is based on capacity (more specifically, half the capacity), and truncating the output down to the desired security level is valid. The Keccak page parameter calculator claims 160-bit security can be achieved with a capacity of 320 bits and 160 bits of output.
But I will admit that changing the association between pre-image security level and output length does seem like it could create some confusion among people who do understand the current hash security situation. To be honest, I would be OK with NIST making the capacity for both be 512 bits (or 576 as DJB suggested). SHA-3-512 performance doesn't suffer much (and it has the most to lose there), security levels are reasonable, and implementation is simpler. I would still like a higher-performing version of Keccak, given that there is already little enough reason to switch to it, but the 512/576 universal capacity might be a reasonable compromise since it wouldn't make SHA-3-256 THAT much slower than NIST's proposed parameters.
At this point, your fellow software developers are pointing and laughing at you for "inventing your own cryptography". Your function doesn't come standard in any library. Every single 3rd party audit of your architecture raises a red flag about this and you have trouble finding any official documentation to back you up. (Ask me how I know about this :-)
I'm not trying to say you don't know what you're doing or that that wouldn't meet the security properties, I'm just saying that the whole point of NIST defining these standards is to save us from having to come up with this kind of thing on our own.
NIST actually does define truncated versions of SHA-2, e.g., SHA-2-512/256. But they specify a different IV so that the functions are distinct.
> some confusion among people who do understand the current hash security situation
Right. I totally agree that most of the time we should assume that collision resistance is the relevant figure, whether we can see the attack or not. But still, SHA-2-256 has 256 bits of preimage resistance, just like a 256-bit random oracle. But SHA-3-256 will have 128? Can we use SHA-3-256 to derive an AES-256 key?
Don't forget that NIST has mentioned they'll standardize variable-output-length SHA3: SHAKE512 with 20-byte output would be perfectly fine. They've also mentioned they might include MAC and AEAD standards, so I'm not convinced the situation is as bad as you make it.
I think you're on to something there with regards to the performance. Schneier was making some noise towards the end of the competition that none of the entries really improved on speed. ( https://www.schneier.com/blog/archives/2012/09/sha-3_will_be... ) Contrary to the article, hashing is used a lot, and there are a few areas where performance will be an issue for some time (the mobile space, and high-density virtualized environments). It may be that the capacity was indeed deemed excessive, and of little added benefit compared to the rewards gained in greater speed.
As another commenter mentioned further down, no one is simply going around busting up 256-bit keys, and when that finally happens, it will be with a computational paradigm that will require a defense built to an entirely different set of requirements, outside the scope of this competition.
DJB posted to the NIST SHA-3 mailing list a couple of days ago suggesting fixing r at 1024, giving the hash a capacity of 576 and security against preimage attacks of >= 256 bits in all configurations.
This sounds pretty prudent. It avoids any security gotchas, and will also make life easier for implementers because the hash will consume a nice round 1024 bits of data at a time for all hash sizes.
This config would have slightly greater anticipated security at 256-bit output than the submission, and somewhat worse at 512-bit output, but in both cases it would be better than NIST's proposed parameters, and the performance impact would be relatively modest.
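As a rough sanity check on those claims, here are the three parameter sets side by side (again just assuming the generic sponge bound of c/2 bits, before the n-bit output imposes its own cap):

    # output length n -> capacity c for each parameter set; the permutation state is 1600 bits
    options = {"submission": {256: 512, 512: 1024},
               "NIST draft": {256: 256, 512: 512},
               "DJB r=1024": {256: 576, 512: 576}}
    for name, caps in options.items():
        for n, c in caps.items():
            print(f"{name:>11}  n={n}  c={c:4}  rate={1600 - c:4}  c/2={c // 2}")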
> it should be based on completely different underlying mathematics
Wasn't that shoe-horned in at the end? There were no requirements that it not be e.g. ARX-based, or AES-based, etc. It wasn't until the final round that any choices were made based on being as alien as possible, where BLAKE's equal security margin and B+/A+ hardware/software performance were tossed out for Keccak's A+/D-.
Also of interest is that DJB's CubeHash was widely panned during the competition for having the exact same "weakened preimage in exchange for performance" tradeoff that is now being made for Keccak.
The weird thing for me is that I predicted Skein would not be selected on the basis of it being the only finalist without the full 2x output capacity (which was a NIST requirement that DJB had bucked, and was part of the reason CubeHash had been removed). I was so confident in this that I bet money on it and won. :P
I once had the opportunity to talk with Joan Daemen, one of the designers of Keccak, and also of AES.
He said that he liked to do what he did because it's fun, but he didn't really believe that it was very useful, because everything has backdoors.
At that time I didn't think much of it, I just thought it was kinda funny that an important cryptographer had such views about his work. But in the light of the NSA developments it's a bit different.
I have no idea what to think of it. Did he know about what the NSA did? Or was he just a bit pessimistic, but he turned out to be right?
This does not, of course, mean that he made backdoors in AES and Keccak for the NSA. If that were the case he probably would not be allowed to speak about it, and he did talk about it very nonchalantly.
Also, this parameter choice does not address the security of MAC constructions based on SHA-3.
For instance, HMAC over SHA-2-256 or SHA-2-512/256 with a 256-bit key is expected to attain 256-bit security (i.e. the MAC size equals the security level).
Yet SHA-3-256 would not reach anything above 128-bit security, even though the MAC is 256 bits long; to attain 256-bit security one would have to scale the MAC size up to 512 bits instead.
Well, not so much, now that SHA-3 is length-extension resistant, obviating the need for HMAC. A standard MAC mode has to be defined anyway, and truncating the output might as well be part of it.
Yeah, Keccak-224 specified c=448. I was trying to simplify things by sticking to Keccak at 256 and 512 bit fixed output lengths, which specified c=512 and c=1024 respectively.
As a backup, ECHO (NIST round 2) has the nice properties of being expensive to execute in both hardware and software, in addition to being based on existing crypto.
Yeah I can't understand why everyone wants "fast" so badly. Something which deliberately can't be made fast is the way to ensure security. I don't really care if my SSL negotiation takes an extra 1ms or if my purchases on Amazon cost an extra $0.001 each because of the expense of the hash functions. I'd like security, thanks. When the price is so trivial, who really cares if it's 10x as expensive?
Because there are a lot of ways to make a fast thing slow in cases where "slow" improves security. And at the same time you're not hurting performance in cases where a slow function doesn't significantly help security and performance might be important.
Making your hash function twice as slow means you've halved the number of hash operations you can perform with fixed computational power. On the other hand, in many situations the resulting doubling of your security margin makes absolutely no practical difference, since the existing margin is many, many orders of magnitude away from any attacker's capabilities. Making things twice as hard on an attacker sounds impressive, until you think of it like asking someone to travel 2 billion light years instead of only 1 billion. Both are ridiculously outside of our current capabilities, but anyone able to travel 1 billion light years is likely going to be able to travel 2 billion as well.
In more specific terms, making a function with 256-bit security 10x slower has a pretty dramatic impact on performance, with a resulting increase to only about 259 bits of security. That's a fairly marginal security gain for making your function 10 times slower. Modern cryptography is fast, but it's not yet fast enough that running at 1/10th the speed won't be noticeable in many situations.
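The arithmetic behind the 259-bit figure, spelled out (a 10x slowdown only buys you log2(10) ~ 3.3 extra "bits" of brute-force work for the attacker):

    import math

    extra_bits = math.log2(10)                 # work factor of a 10x slower function, in bits
    print(f"~{256 + extra_bits:.0f} bits")     # ~259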
Would a simpler way to put it be: if you make something fast and it's done lots of times, it's difficult to speed up further; whereas if you make something slow and it's done significantly fewer times, there's a risk that someone will make it a lot faster?
Yeah, reminds me of the CPU clock speed wars. [0] Shouldn't crypto security margin be $-oriented [1], related to the secrets / lives / etc. being protected? In backup/availability work we formally use something called a BIA (business impact analysis) to justify investments based on risk to operations: a document used by IT people and/or consultants to get business people to spend $ on backing up critical shit. [2]
There are applications that need to hash data as fast as possible as long as there's a perception of a reasonable security margin. ZFS uses SHA(2)-256 for data checksumming. Some crypto protocols involve HMAC as well as encryption for transmitted data. If a hash can't keep up with line rate on SAS, FC, or many-GigE links, because you wanted maximum security, those users are going to find another, faster, less examined, hash function.
The criterion for SHA-3 is not "as secure as possible". An application that needs as secure a hash as possible should use a protocol that combines multiple hashes from different families.
If "fast" did not matter, we could use generic constructions of most cryptographic primitives and have very strong security claims e.g. "If you can find collisions for this hash function, you can solve the discrete logarithm problem on elliptic curves." There are a few downsides to such an approach but the primary one, the one that kills it in practice, is speed.
I've long since given up trying to give meaningful titles to Hacker News posts since the anonymous silent moderators rewrite them.
The blog post I linked is titled "What the heck is going on with NIST’s cryptographic standard, SHA-3?" and that's the title I went with, saving the trouble of someone rewriting it without notification. But they rewrote it anyway. The title I would have chosen is "NIST's ability to do crypto work compromised by NSA", fwiw.
The author doesn't seem to have all the facts. The Keccak team suggested these changes (go back to FOSDEM, February 2013; one of them gave a talk). The whole point was that Keccak won the competition by meeting the spec NIST formulated. Then they proceeded to present how they thought a menu of hash strengths and security guarantees could be structured in a smarter way.
Keccak with a very specific set of parameters is what was submitted to each round of the competition. That was also what was cryptanalyzed by all those folks contributing their time.
I know I wasn't the only one who was a little bit surprised when NIST said "Keccak is SHA-3, now let's see how to standardize a hash function and other applications out of it".