As far as I can tell, there's little to nothing objectionable about this change; it makes urandom behave _more like_ random, by not yielding bytes before the kernel's entropy pool is in a good state (the unseeded early-boot output that getrandom()'s GRND_INSECURE flag exists to explicitly opt into).
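For concreteness, the blocking semantics under discussion are visible through Python's thin wrapper over the getrandom(2) syscall (Linux-only; a sketch of the interface, not part of the patch itself):

```python
import os

# getrandom(2) with flags=0 blocks until the kernel's pool is
# initialized, then never blocks again. On any normally booted
# system this returns immediately.
seeded_bytes = os.getrandom(16, 0)
assert len(seeded_bytes) == 16

# GRND_NONBLOCK asks the kernel to fail with EAGAIN instead of
# blocking if the pool is not yet initialized. Long after boot
# it behaves identically to flags=0.
nonblocking = os.getrandom(16, os.GRND_NONBLOCK)
assert len(nonblocking) == 16
```

The proposed change would give reads of /dev/urandom the flags=0 behavior above, rather than handing out bytes from an uninitialized pool.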
Systems where this would make urandom block for an objectionably long time (because CPU execution time jitter is unavailable or is believed to have low entropy) are largely hypothetical.
I think you can still have specific reservations about CPU execution time jitter, though my experience and reading make me believe this is probably a pretty good source of entropy; personally, I feel the ball is firmly in the court of jitter skeptics to show why the entropy measures from actual running systems are wildly high estimates. [https://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html#toc-A...]
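As a toy illustration of where that jitter comes from (this is not the kernel's actual collector, which conditions timing deltas far more carefully):

```python
import time

# Sample the deltas between consecutive high-resolution timer
# reads. Even this trivial loop shows variation in execution
# time from cache misses, interrupts, and scheduler noise;
# jitter-entropy collectors harvest (and heavily condition)
# exactly this kind of variation.
deltas = []
last = time.perf_counter_ns()
for _ in range(10_000):
    now = time.perf_counter_ns()
    deltas.append(now - last)
    last = now

distinct = len(set(deltas))
print(f"{distinct} distinct delta values out of {len(deltas)} samples")
```

How much *unpredictable* entropy survives once you subtract everything an attacker could model is precisely what the skeptics and the measurements linked above disagree about.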
I also think you can still have specific reservations about how the kernel 'shepherds' its pool of random bits. I honestly am out of touch with the latest algorithms, both in research and in the Linux kernel. It would seem best if the kernel used a cryptographic algorithm where inferring the hidden random pool's state from outputs implies a useful attack on the cryptographic algorithm itself (i.e., has a proof of security). I don't think that Linux does this at the moment, based on recent discussion at https://lwn.net/ml/linux-kernel/20220201161342.154666-1-Jaso...
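A minimal sketch of the property being asked for, using a plain hash in place of whatever primitive the kernel actually uses (an illustration of the design goal, not Linux's construction):

```python
import hashlib

class SketchDRBG:
    """Toy generator with the desired reduction: recovering the
    internal key from outputs would require inverting SHA-256,
    and the key is ratcheted forward on every call, so past
    outputs stay safe even if the current state later leaks."""

    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(b"init" + seed).digest()

    def read(self) -> bytes:
        # Output is a one-way function of the hidden key.
        out = hashlib.sha256(b"out" + self.key).digest()
        # Ratchet: replace the key so the old one is unrecoverable.
        self.key = hashlib.sha256(b"next" + self.key).digest()
        return out

g = SketchDRBG(b"example seed")
a, b = g.read(), g.read()
assert a != b
```

The point of such a construction is that "attacker learns the pool state from observed outputs" is not a new kind of failure to analyze; it is, by construction, an attack on the underlying primitive.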
But let's take a moment to happily reflect: for applications running well after system boot-up has completed, it is now soooo much easier to have your fill of cryptographic-quality random numbers than it was in the bad old days.
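Indeed, on any modern stack this is now a one-liner; for example, in Python the secrets module (backed by os.urandom, and hence the same kernel RNG) is all an application needs:

```python
import secrets

# One call, no entropy bookkeeping, no fallback logic: these
# bytes come from the operating system's CSPRNG.
token = secrets.token_bytes(32)
print(token.hex())
```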
> I think you can still have specific reservations about CPU execution time jitter, though my experience [...]
Just want to point out that the Linus Jitter Dance is already in use today. It's been there for three years. I had nothing to do with that change. The change that I'm now proposing, which this article is about, changes nothing about the Linus Jitter Dance. Whether you like it or not, it's being used already, and has been for three years now, affecting all interfaces to the rng.
I only mention it in my patch for the sole purpose of indicating that blocking in /dev/urandom has been unproblematic for three years now, because it will unblock a second later. That's the only reason I mention the Linus Jitter Dance at all.
The only purpose of the patch is to make /dev/urandom block.
> I also think you can still have specific reservations about how the kernel 'shepherds' its pool of random bits. [...] It would seem best if the kernel used a cryptographic algorithm