Due to a misplaced parenthesis, if insufficient GOOD
bits were available to satisfy a request, the
keying/rekeying code requested either 32 or 64 ANY bits,
rather than the balance of bits required to key the
stream generator.
I think this paragraph is a nice reminder of how hard crypto can be.
A misplaced parenthesis can corrupt output data from an ordinary program too. But with crypto, severe problems have a much easier time staying silent through QA, interop testing, and widespread usage.
I think it's because the concept of "cryptographically secure" is essentially trying to prove a negative. That's hard enough in general, but especially hard against an intelligent adversary about whom you may know nothing. You're trying to prove that no present or future attacker will be able to obtain any information that could let them unravel your secrets.
Crypto is about building sky castles full of really really long secrets floating on foundations of really small ones, and then tossing them all up in the air to yourself as you run down the street backwards with rabid weasels chasing you.
Best paper at USENIX Security '12.
Nadia Heninger, UC San Diego; Zakir Durumeric, University of Michigan; Eric Wustrow and J. Alex Halderman, University of Michigan
https://www.usenix.org/conference/usenixsecurity12/mining-yo...
"We find that 0.75% of TLS certificates share
keys due to insufficient entropy during key generation,
and we suspect that another 1.70% come from the same
faulty implementations and may be susceptible to compromise. Even more alarmingly, we are able to obtain
RSA private keys for 0.50% of TLS hosts and 0.03% of
SSH hosts, because their public keys shared nontrivial
common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient
signature randomness."
Ron was wrong, Whit is right
Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter
http://eprint.iacr.org/2012/064
To be clear, the PS3 problem was not a problem of randomness quality or a PRNG backdoor. It was illegal nonce reuse in the zero-knowledge proof-of-key-possession protocol embedded in ECDSA signatures.
Sorry, 'illegal' in the sense of being contrary to the requirements of the crypto protocol, in the same way that a CPU ISA can have the concept of an 'illegal operation'.
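For anyone who hasn't seen why nonce reuse is so fatal: two signatures made with the same k hand over the private key after a couple of modular inversions. A toy sketch with made-up tiny parameters (plain DSA rather than the PS3's ECDSA, but the algebra is identical):

    /* Toy demo of why (EC)DSA nonce reuse is fatal.  These are
       made-up tiny parameters (p=23, q=11, g=4), NOT real DSA --
       but the algebra is the same at real key sizes. */
    #include <stdio.h>

    static long mod(long a, long m) { return ((a % m) + m) % m; }

    /* brute-force modular inverse; fine for a toy modulus */
    static long inv(long a, long m) {
        for (long i = 1; i < m; i++)
            if (mod(a * i, m) == 1) return i;
        return -1;
    }

    static long powm(long b, long e, long m) {
        long r = 1;
        while (e--) r = mod(r * b, m);
        return r;
    }

    int main(void) {
        const long p = 23, q = 11, g = 4;  /* toy group */
        const long x = 7;                  /* the "secret" key */
        const long k = 3;                  /* the reused nonce */
        long h1 = 4, h2 = 9;               /* toy message hashes */

        long r  = mod(powm(g, k, p), q);   /* same r in both signatures */
        long s1 = mod(inv(k, q) * (h1 + x * r), q);
        long s2 = mod(inv(k, q) * (h2 + x * r), q);

        /* attacker sees (r, s1, h1) and (r, s2, h2), solves for k then x */
        long k_rec = mod((h1 - h2) * inv(mod(s1 - s2, q), q), q);
        long x_rec = mod((s1 * k_rec - h1) * inv(r, q), q);

        printf("recovered k=%ld x=%ld (true k=%ld x=%ld)\n",
               k_rec, x_rec, k, x);
        return 0;
    }

With a fresh random k per signature none of this works, which is exactly why the RNG feeding the nonce matters so much.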
The advisory says that reading from /dev/random is fine, but reading from /dev/urandom is affected. Shouldn't cryptographic applications be using /dev/random to begin with? I was under the impression that /dev/urandom is only for when low-quality randomness is acceptable.
/dev/urandom is supposed to provide cryptographic-quality randomness. /dev/random does provide "better" randomness, in a sense, but may block; it is typically used for the generation of long-term cryptographic keys (like GPG keys).
The only time that ought to make a difference in a modern, properly implemented CSPRNG is right after system startup, when very little unpredictability has made it into the pool.
Consequently, system boot scripts are just about the worst possible place to generate new keys if they didn't already exist.
From the manpage of /dev/random:
"If a seed file is saved across reboots as recommended below (all major Linux distributions have done this since 2000 at least), the output is cryptographically secure against attackers without local root access as soon as it is reloaded in the boot sequence, and perfectly adequate for network encryption session keys. Since reads from /dev/random may block, users will usually want to open it in nonblocking mode (or perform a read with timeout), and provide some sort of user notification if the desired entropy is not immediately available."
Only if the seed file was generated by a secure kernel during a clean shutdown and kept secret the whole time. There's a lot that can go wrong there, especially if you're an embedded system running on flash.
I also seem to recall host keys being generated on first boot. Perhaps some installers are smart enough to prime that seed file.
Then what are the requirements that make this necessary? Given that security in layers tends to be the ideal, surely the dependence on generating keys at boot time could be considered part of the bug?
The basic requirement for a CSPRNG is that no one will ever be able to guess the values of a prior or future set of 200 bits with greater than a 1 in 2^200 chance of success (short of pwning your kernel). And it's not just a super-smart attacker you have to worry about: sometimes all the systems on the internet accidentally collaborate to unmask each other's weak keys (see Heninger et al., linked in the thread).
Booting up naturally involves starting network services and daemons that need their keys, right? It's a reasonable thing to want to do. The hard thing is that if we say "you can't have any output from the system CSPRNG until we're totally sure it's fully preheated", we end up with a few inevitable situations.
Blocking read: "BUG: System hangs noticeably on boot", "If I change /dev/random to /dev/urandom my system boots 5 seconds faster! Woot!", and "So how long do I have to wait then? Why does boot take longer on some networks than others?"
Nonblocking read: if the kernel returns 0 bytes of data from read(), some apps will just carry on processing with their uninitialized or zeroed buffer (sketched below).
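Roughly this (a sketch; the 32-byte key buffer and the give-up-and-abort policy are just my choices for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        unsigned char key[32];
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) abort();

        /* BAD: read(fd, key, sizeof(key)); -- return value ignored.
           A short read (or 0, or -1) leaves part of key uninitialized
           or zeroed, and nothing ever complains.  It "works" on every
           developer workstation, because read() rarely comes up short
           there. */

        /* BETTER: insist on every byte, fail loudly otherwise. */
        size_t got = 0;
        while (got < sizeof(key)) {
            ssize_t r = read(fd, key + got, sizeof(key) - got);
            if (r <= 0) {
                fprintf(stderr, "could not gather key material\n");
                abort();         /* refusing to run beats a weak key */
            }
            got += (size_t)r;
        }
        close(fd);
        return 0;
    }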
It's not so much that gathering entropy is hard; it's that making an accurate estimate of the entropy you've gathered is hard (it's basically writing embedded code that attempts to estimate the capabilities of some unknown future black swan). Getting a platform full of developers to write code that correctly handles an error condition that never appears on their own workstation seems basically impossible.
I know this isn't the (main) takeaway message, but it does make me feel a little better about some errors I've made in the past to know that even heavily vetted code can have these types of errors.
I don't know if this code would qualify as "heavily vetted." Thor Lancelot Simon wrote it and imported it into the tree himself for NetBSD v6 and onwards. It probably hasn't seen that many eyes for review.
Fix a security issue: when we are reseeding a PRNG seeded early in boot
before we had ever had any entropy, if something else has consumed the
entropy that triggered the immediate reseed, we can reseed with as little
as sizeof(int) bytes of entropy.
-Wsizeof-pointer-memaccess (Clang and GCC 4.8) is the only sizeof-related warning I know of, which warns about some cases where both pointer and size are immediately passed to a handful of builtin functions (in GCC, some among mem…, str…, s…printf). C's typing is much too weak for -Wsizeof-pointer-memaccess to be generally useful.
Without getting into typing, a simple
warning: arithmetic inside sizeof
would catch this. I don't think it would catch much noise; I don't see any useful uses of pointer arithmetic immediately inside a sizeof. It's a bit specific, though. Are there other constructs that would benefit from similar warnings?
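I don't know what the offending NetBSD expression actually looked like, but the bug class such a warning would flag is something like this (request_bytes is a made-up stand-in for the RNG request):

    /* Hypothetical sketch of the bug class, not the actual NetBSD
       code: a parenthesis slides one token over and the byte count
       silently becomes the size of a pointer. */
    #include <stdio.h>

    static void request_bytes(unsigned char *dst, size_t n) {
        (void)dst;
        printf("requesting %zu bytes\n", n);  /* stand-in for the RNG */
    }

    int main(void) {
        unsigned char key[32];
        size_t have = 8;                      /* bytes already keyed */

        /* Intended: ask for the balance of the key. */
        request_bytes(key + have, sizeof(key) - have);  /* 24 bytes */

        /* Misplaced parenthesis: sizeof(key - have) is the size of an
           unsigned char *, i.e. 4 or 8 bytes -- which is exactly how
           you ask for "either 32 or 64" bits instead of the balance. */
        request_bytes(key + have, sizeof(key - have));  /* 4 or 8 bytes */
        return 0;
    }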
Well... sizeof of an expression is mostly useful for not repeating yourself in cases when you need the size of a type you already have some way of referencing, as in
a = malloc(sizeof(*a));
It's also useful for various hacks (stuff like compile-time assert macros), since it doesn't evaluate the expression, only its type.
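For example, one common way to spell that hack (just one variant of the trick; C11 has _Static_assert for this now):

    /* A pre-C11 compile-time assert.  sizeof never evaluates its
       operand, only its type; a negative array size is a compile
       error, so a false condition breaks the build at zero runtime
       cost. */
    #define CT_ASSERT(cond) ((void)sizeof(char[(cond) ? 1 : -1]))

    int main(void) {
        CT_ASSERT(sizeof(int) >= 4);       /* compiles fine */
        /* CT_ASSERT(sizeof(int) == 2); */ /* would break the build */
        return 0;
    }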
Sure, I can think of plenty of ways it could arise via metaprogramming. I don't know if I would want such a warning enabled in my own template-heavy C++ code.
But perhaps it could be useful for those following the NetBSD style. :-)
ERROR: LINE 3, COLUMN 26
ERROR: UNMATCHED RIGHT PARENTHESIS ENCOUNTERED.
ERROR: WHILE PROCESSING INPUT:
         1         2         3         4         5         6         7
123456789012345678901234567890123456789012345678901234567890123456789012
                         ^
                         |
ERROR: PROCESSING WILL CONTINUE WITH THE NEXT AVAILABLE INPUT TOKEN.
ERROR: ALL YOUR BASE ARE BELONG TO US.

Sorry, couldn't resist ;) (big fan of Clojure and elisp here btw)