Uniting the Linux random-number devices (2022) (lwn.net)
93 points by PaulHoule on Dec 20, 2023 | 41 comments



The commit [1] was eventually reverted [2].

[1] https://github.com/torvalds/linux/commit/6f98a4bfee72c22f50a...
[2] https://github.com/torvalds/linux/commit/0313bc278dac7cd9ce8...

> "This causes a large number of qemu boot test failures for various architectures (arm, m68k, microblaze, sparc32, xtensa are the ones I observed). Common denominator is that boot hangs at 'Saving random seed:'"

This isn't hugely unexpected: we tried it, it failed, so now we'll revert it.


A bit later another commit [1] was merged that makes reads from /dev/urandom opportunistically initialize the RNG. In practice this has the same result as the reverted commit on non-obsolete architectures, which support jitter entropy generation.

[1] https://github.com/torvalds/linux/commit/48bff1053c172e6c7f3...


On demand initialization? What a surprise.


`rdtsc`, you have a very appropriate username for the topic


Ha! You’re right.


Thanks for this, I'd been going off the previous articles on this, and hadn't realized it had been reverted.


No problem. Thanks for finding and posting the article! The change was a good idea, but, as these things go, corner cases intervened.


> The jitter entropy technique relies on differences in timing when running the same code, which requires both a high-resolution CPU cycle counter and a CPU that appears to be nondeterministic (due to caching, instruction reordering, speculation, and so on). There are some architectures that do not provide that, however, so no entropy can be gathered that way. Donenfeld noted that non-Amiga m68k systems, two MIPS models (R6000 and R6000A), and, possibly, RISC-V would be affected;

So, I understand not wanting to break RISC-V support, but _should_ people really care about breaking compatibility with early 1990s MIPS chips and a 1979 Motorola CPU?


In reality, there's no reason there cannot be two implementations, with the correct one automatically selected on build according to architecture.


m68k variants were quite popular in the embedded world until fairly recently.


They will still stay around for a while.

There's also Renesas Rx, an ISA heavily inspired by m68k.

But even Renesas is shifting to RISC-V, sensibly so.


Why shouldn't people expect Linux to run on ancient computers? I guess this is why we have NetBSD.


Mac/PPC user here. The right thing to do is to support the 99% use case best you can, and leave dealing with the quirks of ancient garbage^W^W these wonderful machines of times past to the enthusiasts.

If you want to e.g. make use of modern crypto, the slow CPU is a far bigger issue than sourcing the random numbers.


There are almost certainly new-ish low-end ARM chips this would hit as well; it's not really about the specific examples.

Declaration of bias: I have a non-Amiga m68k that can run Linux.


Clearly Linus and the maintainers of those architectures care enough not to break them (and those architecture maintainers have enough clout to be taken seriously).

How old does a CPU need to be in order to no longer be supported? The 32-bit x86 architecture is nearly 40 years old - should it be retired for being too old?


> The 32-bit x86 architecture is nearly 40 years old - should it be retired for being too old?

Parts of it already are, from a Linux point of view anyway. It looks like the mainline kernel will drop support for 486 family chips after 6.x, and support for the 386 was dropped in 3.8 (back in 2012).

Some distributions already don't release a version for 32-bit-only intel/amd architectures. It'll be a long time before x86 support is dropped in its entirety, but even today you already have less choice than if you run something sporting an amd64- or arm-compatible architecture.


Yeah, it doesn't make sense to talk about the m68k being a 1979 processor, when Motorola continued to develop it and release new chips all through the 80s, and it continued to be used in new Mac and NeXT computers up through the early 90s. The 80486 would be its peer as far as obsolescence goes, not the 8086. But, as others mentioned, even the i486 is about to lose Linux support.


> The 32-bit x86 architecture is nearly 40 years old - should it be retired for being too old?

I’m sure there is some way to gather architectural usage numbers + their likelihood of upgrading, roll it into a value, and if that value drops below X, that architecture gets left behind.

The reason why I mention likelihood of upgrade is that there is a gigantic amount of MIPS routers running Linux, but virtually all of them are stuck on old kernels and very unlikely to be upgraded. I’m sure other architectures have similar snags.


They mostly run old kernels because their vendors do not keep them up to date, and the community does not have champions willing to upstream support for them. In theory it should still be entirely possible to refresh those devices with a modern kernel and user space. Patches would likely be welcomed upstream as well.


The PowerPC architecture is 32 years old, yet it still lives on in, among other forms, the RAD750, which powers things like JWST, the Perseverance and Curiosity rovers, Juno, LRO, and MRO. Reports say the RAD750 costs about US$200K.


> So, I understand not wanting to break RISC-V support, but _should_ people really care about breaking compatibility with early 1990s MIPS chips and a 1979 Motorola CPU?

Yes, we care about those a lot more than about the 2020 MIPS incarnation (you young kids call it RISC-V, fooling nobody)


RISC-V resembles Berkeley RISC-I and RISC-II much more closely than it does MIPS.

MIPS was inspired by RISC, not the other way around.


I'm imagining Kramer, Linux developer: "Jerry... what if we randomly used a different random device each time?!"


Picked at random? But who’s picking the picking algorithm?!?


> Picked at random? But who’s picking the picking algorithm?!?

To give a serious response to this: you can combine the outputs of all of them to pick one of them.


What did Windows and Mac do a decade or two ago? What do they do now? Why do I only ever see Linux struggling with random numbers and booting here?


They solve it by not supporting 99% of the hardware that Linux supports.


Even though not all architectures have an RTC or strong PRNG instructions, almost all of them have writable storage in the form of NVRAM/NAND/disk.

Why can’t they just patch up the kernel to persist PRNG state there and require that the boot loader recovers it upon reboot? The kernel would then have high quality random data available as soon as it gets launched.


Windows does that, persists a seed in the registry, but it's not enough.

Imagine a virtual machine snapshot that you start multiple times. The separate runs would have the exact same persisted state.


Doesn't matter; it just feeds more entropy during boot. Even if it were the same on all PCs on the planet, the output wouldn't be in a degraded state.

When you are dealing with an attacker who can feed your PRNG non-stop with their own entropy, you have other issues to deal with.


With that reasoning, one doesn't need an initial randomness file to begin with.

Either you think there's plenty of usable entropy soon enough after power-on that values can't be predicted, or you think there isn't and we need to keep a file to initialise from.

> an attacker which can feed your PRNG non stop with their own entropy

It is not needed if you know the initial state from the file (which was present in the image file an attacker got their hands on, for example). Knowing a sufficient percentage of outputs (such as the TCP seq/ack number (I forget which one the server generates) when you send a decent rate of SYN packets) lets you verify which entropy addition would have led to that output being generated. That's a handful of bits to brute force at each step, unless the entropy additions are processed in prohibitively large batches (say, ≥80 bits at a time), in which case you start out completely predictable until that entropy has accrued (which may or may not be an issue depending on circumstances).


Very useful for certain kinds of testing, though.


The PRNG state is responsible for generating most of the user's cryptographic material and is quite important to the system's security. As such, the PRNG state is a security parameter that should never be exportable nor importable.

There are many new attacks that could surface if import/export were possible; off the top of my head: quiet preloading of an attacker-selected state while the machine is off.


You don't need to export the current running kernel PRNG internal state, all you need is to save an unpredictable seed that can be used to fill the kernel entropy pool after next boot.

https://systemd.io/RANDOM_SEEDS/


> quiet preloading of an attacker-selected state while the machine is off.

That attacker with physical access can choose from multiple other attacks, so why bother making this one more secure? They can sniff keystrokes, install malware into the UEFI, and so on.


Just do what NetBSD recommends and you should be fine: https://man.netbsd.org/urandom.4 They explicitly state you should feed the PRNG on every shutdown and boot, and also explain why that isn't an issue.


>quiet preloading

Just preload the OS then.


Does Linux use the RNG inside Intel/AMD CPUs?



If I remember correctly, the TL;DR is that RDRAND output is mixed with other "standard" entropy sources, which basically makes getrandom() non-blocking immediately.


This is from 2022.



