> If the kernel aims to have 2 ways to get randomness [...] and there's two device files why not map one to one and one to the other?

Because both these files come with preconceived notions from various stages in the life of Unix regarding what guarantees they provide, and "best effort" works for neither of those.

If this change had come in, say, 2005, they maybe could have gotten away with /dev/random = blocked until initialized and /dev/urandom = best effort, since that was the common wisdom at the time. But for the last 10 years, more and more people have switched to using /dev/urandom for everything, since it is actually good enough once initialized (and most application devs only care about the "once initialized" phase, since they don't work on early-boot stuff). Switching /dev/urandom to GRND_INSECURE semantics now would therefore be a potentially bad idea.
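(For concreteness, here's a minimal sketch of those two contracts as exposed by getrandom(2). This is my own illustration, not from the thread, and it assumes a Linux 5.6+ kernel and a libc that defines GRND_INSECURE in <sys/random.h>:)

    #include <stdio.h>
    #include <sys/random.h>

    int main(void) {
        unsigned char buf[16];

        /* flags == 0: blocks only until the pool is initialized,
           then never blocks again. This is the "good enough for
           everything once initialized" behavior. */
        if (getrandom(buf, sizeof buf, 0) < 0)
            perror("getrandom");

        /* GRND_INSECURE (Linux 5.6+): pure best effort. Returns
           immediately even before the pool is initialized, which is
           the semantics being debated for /dev/urandom. */
        if (getrandom(buf, sizeof buf, GRND_INSECURE) < 0)
            perror("getrandom(GRND_INSECURE)");

        return 0;
    }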




But calling /dev/urandom "best effort" doesn't change those expectations; in fact, it keeps them the same. And saying that most apps "don't work on early-boot stuff", so they aren't affected, doesn't mean we should risk breaking systems that will inevitably run some software during early boot.

It just feels like the argument for this change is: this is irrelevant for 99.99% of applications, so who cares? Well, the 0.01% care!

EDIT:

> Switching /dev/urandom to GRND_INSECURE now would therefore be a potentially bad idea

And, again, maybe I'm misunderstanding, but the Jason Donenfeld email seems to say this is effectively the behavior we already have, i.e. no guarantee of "initialization" or "sufficient entropy" on the urandom device.
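(To make that distinction concrete, here's a hedged sketch, again my own and assuming current Linux semantics: a read from /dev/urandom never blocks, so it carries no initialization guarantee, while getrandom() at its defaults does block until the pool is seeded:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/random.h>

    int main(void) {
        unsigned char buf[16];

        /* /dev/urandom: never blocks, so at early boot this can hand
           back output from a not-yet-initialized pool. */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd >= 0) {
            if (read(fd, buf, sizeof buf) < 0)
                perror("read");
            close(fd);
        }

        /* getrandom() with flags == 0: blocks until the pool is
           initialized, so the output always comes from a seeded CRNG. */
        if (getrandom(buf, sizeof buf, 0) < 0)
            perror("getrandom");

        return 0;
    }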



