
Huh, I did my back-of-the-envelope math wrong when I first decided that wasn't plausible.

Here is a better back-of-the-envelope estimate.

Suppose that we're using a random number generator that has 4 billion possible internal states. (We're using a 32-bit C integer under the hood.) Suppose that we're having it generate random 256-bit integers, which we then test for primality. The probability of such an integer being prime is about 1/ln(2^256), or about 1/177. So there are around 24 million possible primes this generator can produce.
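Sanity-checking that arithmetic in Python (the 1/ln(N) density is the prime number theorem; the 32-bit state space is my assumption about the broken generator):

    import math

    states = 2**32                         # ~4 billion possible seeds
    prime_density = 1 / math.log(2**256)   # prime number theorem: ~1/177

    print(1 / prime_density)               # ~177.4
    print(states * prime_density)          # ~24.2 million reachable primes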

A key has 2 of these primes, so 2 random keys have 4 possible pairings of their primes that could match, for about 1 chance in 6 million of matching. Given n keys, there are roughly n*n/2 possible pairs of keys which might match. It turns out that if you have about 570,000 keys in your overall pool generated by the same bad algorithm, you'd expect roughly 27,000 matches.
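Same sketch for the collision math, assuming key pairs are independent and all 4 prime pairings are equally likely:

    import math

    primes = 2**32 / math.log(2**256)   # ~24.2 million possible primes
    p_match = 4 / primes                # 2 primes per key -> 4 comparisons
    print(1 / p_match)                  # ~1 in 6 million per pair of keys

    n = 570_000                         # keys from the same bad generator
    print(n * n / 2 * p_match)          # ~27,000 expected matches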

This is scary. What is even scarier is that if you can identify the bad algorithm common to this pool of poor choices, it is quite doable to enumerate every prime it could possibly produce. And then rather than just being able to compromise the 0.2% of published keys that share a prime, you can relatively easily compromise about 4% of them instead.
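For anyone unclear on why a match is fatal: two moduli that share a prime give that prime up to a single gcd, no factoring required. A toy illustration, with small primes standing in for 256-bit ones:

    from math import gcd

    p, q1, q2 = 101, 103, 107    # toy primes; real ones are 256 bits
    n1, n2 = p * q1, p * q2      # two RSA moduli that happen to share p

    shared = gcd(n1, n2)         # Euclid's algorithm: cheap even at 2048 bits
    assert shared == p
    print(n1 // shared, n2 // shared)   # the remaining factors fall right out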

If they haven't already done so, I would expect interested parties to identify the bad algorithms in widespread use and enumerate the primes they are likely to produce. The NSA is the obvious candidate, but criminal groups with access to a botnet can also perform this operation.
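What that enumeration might look like, assuming a hypothetical keygen whose 32-bit seed fully determines each prime (random.Random and sympy.nextprime here are stand-ins for whatever the real broken code does):

    import random
    from sympy import nextprime    # any next-prime routine works here

    def weak_prime(seed):
        # Hypothetical bad keygen: the 32-bit seed determines everything
        rng = random.Random(seed)
        return nextprime(rng.getrandbits(256))

    # Walking the full seed space enumerates every prime the generator can
    # emit; each published modulus then gets trial-divided by the candidates.
    # Embarrassingly parallel: 2^32 seeds shard nicely across a botnet.
    candidates = (weak_prime(seed) for seed in range(2**32))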

This would be an interesting time to work at Google, because one assumes that an interested researcher there could get his hands on a Very Big Pile of Keys fairly quickly, and this seems to be made for map/reduce.
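And it is: the standard trick here is Bernstein-style batch GCD, where a product tree plus a remainder tree find every shared prime in a pile of moduli in quasi-linear time, and both tree passes shard naturally across machines. A single-machine sketch of the idea:

    from math import gcd

    def product_tree(ns):
        tree = [ns]
        while len(tree[-1]) > 1:
            lvl = tree[-1]
            tree.append([lvl[i] * lvl[i + 1] if i + 1 < len(lvl) else lvl[i]
                         for i in range(0, len(lvl), 2)])
        return tree

    def batch_gcd(ns):
        # For each modulus n: compute (product of all moduli) mod n^2 by
        # descending the product tree, then take gcd(n, remainder // n),
        # which equals gcd(n, product of all the *other* moduli).
        tree = product_tree(ns)
        rems = tree.pop()    # the root: product of every modulus
        while tree:
            lvl = tree.pop()
            rems = [rems[i // 2] % (n * n) for i, n in enumerate(lvl)]
        return [gcd(r // n, n) for r, n in zip(rems, ns)]

    print(batch_gcd([101 * 103, 101 * 107, 109 * 113]))   # [101, 101, 1]

A result equal to the modulus itself means a duplicated modulus; those few stragglers get pairwise gcds as a cleanup pass.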
