> "This causes a large number of qemu boot test failures for various architectures (arm, m68k, microblaze, sparc32, xtensa are the ones I observed). Common denominator is that boot hangs at 'Saving random seed:'" This isn't hugely unexpected - we tried it, it failed, so now we'll revert it.
A bit later another commit [1] was merged that makes reads from /dev/urandom opportunistically initialize the RNG. In practice this has the same result as the reverted commit on non-obsolete architectures, i.e. those that support jitter-based entropy generation.
> The jitter entropy technique relies on differences in timing when running the same code, which requires both a high-resolution CPU cycle counter and a CPU that appears to be nondeterministic (due to caching, instruction reordering, speculation, and so on). There are some architectures that do not provide that, however, so no entropy can be gathered that way. Donenfeld noted that non-Amiga m68k systems, two MIPS models (R6000 and R6000A), and, possibly, RISC-V would be affected;
So, I understand not wanting to break RISC-V support, but _should_ people really care about breaking compatibility with early 1990s MIPS chips and a 1979 Motorola CPU?
Mac/PPC user here. The right thing to do is to support the 99% use case as best you can, and leave dealing with the quirks of ancient garbage^W^W these wonderful machines of times past to the enthusiasts.
If you want to e.g. make use of modern crypto, the slow CPU is a far bigger issue than sourcing the random numbers.
Clearly Linus and the maintainers of those architectures care enough not to break them (and those architecture maintainers have enough clout to be taken seriously).
How old does a CPU need to be in order to no longer be supported? The 32-bit x86 architecture is nearly 40 years old - should it be retired for being too old?
> The 32-bit x86 architecture is nearly 40 years old - should it be retired for being too old?
Parts of it already are, from a Linux point of view anyway. It looks like the mainline kernel will drop support for 486 family chips after 6.x, and support for the 386 was dropped in 3.8 (back in 2012).
Some distributions already don't release a version for 32-bit-only intel/amd architectures. It'll be a long time before x86 support is dropped in its entirety, but even today you already have less choice than if you run something sporting an amd64 or arm compatible architecture.
Yeah, it doesn't make sense to talk about the m68k being a 1979 processor, when Motorola continued to develop it and release new chips all through the '80s, and it continued to be used in new Mac and NeXT computers up through the early '90s. The 80486 would be its peer as far as obsolescence goes, not the 8086. But as others mentioned, even the i486 is about to lose Linux support.
> The 32-bit x86 architecture is nearly 40 years old - should it be retired for being too old?
I’m sure there is some way to gather architectural usage numbers + their likelihood of upgrading, roll it into a value, and if that value drops below X, that architecture gets left behind.
The reason why I mention likelihood of upgrade is that there is a gigantic amount of MIPS routers running Linux, but virtually all of them are stuck on old kernels and very unlikely to be upgraded. I’m sure other architectures have similar snags.
They mostly run old kernels because their vendors do not keep them up to date, and the community does not have champions willing to upstream support for them. In theory it should still be entirely possible to refresh those devices with a modern kernel and user space. Patches would likely be welcomed upstream as well.
The PowerPC architecture is 32 years old, yet it lives on in, among other forms, the RAD750, which powers things like JWST, the Perseverance and Curiosity rovers, Juno, LRO, and MRO. Reports say the RAD750 costs about $200K US.
> So, I understand not wanting to break RISC-V support, but _should_ people really care about breaking compatibility with early 1990s MIPS chips and a 1979 Motorola CPU?
Yes, we care about them a lot more than about the 2020 MIPS incarnation (you young kids call it RISC-V, fooling nobody).
Even though not all architectures have an RTC or strong PRNG instructions, almost all of them have writable storage in the form of NVRAM/NAND/disk.
Why can't they just patch the kernel to persist PRNG state there, and require that the boot loader restore it at boot? The kernel would then have high-quality random data available as soon as it launches.
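A minimal sketch of that save/restore cycle, under assumptions: the path and pool size below are made up for illustration (real systems use e.g. systemd's /var/lib/systemd/random-seed or BusyBox's seedrng), and the restore step only mixes bytes into the pool; it does not credit entropy, which is what actually unblocks getrandom() and requires the RNDADDENTROPY ioctl.

```python
import os

SEED_PATH = "/tmp/random-seed"  # assumption; real systems use /var/lib/...
POOL_BYTES = 512                # roughly the size of the kernel input pool


def save_seed(path=SEED_PATH):
    """At shutdown: stash unpredictable bytes on writable storage."""
    seed = os.urandom(POOL_BYTES)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, seed)
    finally:
        os.close(fd)


def restore_seed(path=SEED_PATH):
    """At boot: mix the saved bytes back into the kernel pool.

    Writing to /dev/urandom mixes the data in but does NOT credit
    entropy; crediting needs the RNDADDENTROPY ioctl, which is why a
    plain `cat seed > /dev/urandom` is not quite enough on its own.
    """
    with open(path, "rb") as f:
        seed = f.read()
    with open("/dev/urandom", "wb") as pool:
        pool.write(seed)
    save_seed(path)  # immediately replace the seed so it is never reused
```

The immediate re-save matters: if the same seed were fed in on two boots (say, after a crash), both boots would start from identical pool input.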
With that reasoning, one doesn't need an initial randomness file to begin with.
Either you think there's plenty of usable entropy soon enough after power-on that values can't be predicted, or you think there isn't and we need to keep a file to initialise from.
> an attacker which can feed your PRNG non stop with their own entropy
Is not needed if you know the initial state from the file (which was present in an image file an attacker got their hands on, for example). Knowing a sufficient percentage of outputs, such as the TCP seq/ack number (I forget which one the server generates) when you send a decent rate of SYN packets, lets you verify which entropy addition would have led to that output being generated. That's a handful of bits to brute force at each step, unless the entropy additions are processed in prohibitively large batches (say, ≥80 bits at a time), in which case you start out completely predictable until that entropy has accrued (which may or may not be an issue depending on circumstances).
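A toy model of that brute-force, not the kernel's actual construction: state evolves by hashing in small entropy additions, the attacker knows the leaked initial state, observes a short output per step (like a 32-bit TCP ISN), and recovers each addition with 256 guesses instead of searching the whole space at once.

```python
import hashlib


def mix(state: bytes, entropy: bytes) -> bytes:
    """Fold one entropy addition into the pool state (toy construction)."""
    return hashlib.sha256(state + entropy).digest()


def output(state: bytes) -> bytes:
    """A short observable output derived from the state, e.g. a TCP ISN."""
    return hashlib.sha256(b"out" + state).digest()[:4]


# Defender: starts from a state the attacker knows (leaked via a cloned
# image), then mixes in one unknown byte of "fresh" entropy per step.
state = b"\x00" * 32
secret_steps = [b"\xa7", b"\x13", b"\xfe"]
observed = []
for e in secret_steps:
    state = mix(state, e)
    observed.append(output(state))

# Attacker: replays the mixing, trying all 256 candidates at each step
# and keeping whichever one is consistent with the observed output.
guess = b"\x00" * 32
recovered = []
for out in observed:
    for cand in range(256):
        s = mix(guess, bytes([cand]))
        if output(s) == out:
            guess = s
            recovered.append(bytes([cand]))
            break
# 3 * 256 hash trials recover all three additions; had the three bytes
# been mixed in as one 24-bit batch, the search would be 2**24 instead.
```

This is why the comment's threshold matters: per-step work is 2^bits, so only large batches (≥80 bits) push the search out of reach.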
The PRNG state is responsible for generating most of the user's cryptographic material and is quite important to the system's security. As such, the PRNG state is a security parameter that should never be exportable nor importable.
There are many new attacks that could surface if import/export were possible; off the top of my head: quiet preloading of an attacker-selected state while the machine is off.
You don't need to export the current running kernel PRNG internal state, all you need is to save an unpredictable seed that can be used to fill the kernel entropy pool after next boot.
> quiet preloading of an attacker-selected state while the machine is off.
An attacker with physical access can choose from multiple other attacks, so why bother making this one more secure? They can sniff keyboard strokes, install malware into the UEFI, etc.
Just do what netbsd recommends and you should be fine https://man.netbsd.org/urandom.4
They explicitly state that you should re-seed the PRNG from its own output on every shutdown and boot, and they also explain why that isn't an issue.
If I remember correctly, the TL;DR is that RDRAND output is mixed with the other "standard" entropy sources, which makes getrandom() non-blocking immediately.
[1] https://github.com/torvalds/linux/commit/6f98a4bfee72c22f50a...

[2] https://github.com/torvalds/linux/commit/0313bc278dac7cd9ce8...