NIST Interoperable Randomness Beacons (nist.gov)
65 points by diggan 4 months ago | hide | past | favorite | 29 comments



I remember seeing an earlier version of this, and one interesting thing the 2.0 version adds is a pre-commitment value in each beacon: the hash of the random value in the next beacon. This lets users combine random values from multiple beacons without worrying about a malicious beacon choosing its output based on the outputs of the others, because each beacon has to choose its output before seeing the output values of the others. This helps fix one of the major weaknesses of the earlier version, where you had to somewhat trust individual beacon sources even if you picked multiple sources.

As an aside, I find cryptographic commitment schemes to be one of the more interesting ideas in cryptography. The idea that you can later prove you had selected a value at a particular time, without revealing anything about that value, is a pretty cool property that you can do some very interesting things with.
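A minimal commit/reveal sketch of the idea, using a hash commitment (toy code, not a production scheme; the random nonce keeps low-entropy values from being brute-forced):

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). Publish the commitment now;
    keep the value and nonce secret until reveal time."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Later, reveal value and nonce; anyone can check them against
    the commitment published earlier."""
    return hashlib.sha256(nonce + value).digest() == commitment

c, n = commit(b"my secret prediction")
assert verify(c, b"my secret prediction", n)
assert not verify(c, b"a different prediction", n)
```

The commitment reveals nothing about the value (it's a hash of nonce plus value), but it binds you: you can't later open it to a different value without finding a SHA-256 collision.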


P2P gaming should use this too: everyone makes their moves, sends a hash of the moves, then sends the moves themselves. That eliminates cheating during the final move exchange, where someone could otherwise change their moves after seeing the other players' moves.


Unfortunately this only works for turn-based games with discrete states. It's much more difficult to do with realtime continuous movement, where you need dead reckoning and have to fudge the math a little bit.


It's interesting that using a PoW blockchain such as Bitcoin as a random beacon is another way of implementing a pre-commitment. Since blocks are built on top of one another and have to meet PoW difficulty requirements, miners are effectively pre-committing to the class of random values the next beacon can come from, while the economics (and the provable difficulty of PoW) make it expensive for them to manipulate future random values by discarding valid PoW solutions with undesired beacon values. Basically, for every bit in the PoW beacon value that you want to fix, you have to throw away half of the potential block solutions, even if you have 100% of the hashpower: expensive!

Related: another clever technique that can build on any random beacon is to use a long hash chain to determine the final beacon value. Zcash used this for one of their trusted setups. Basically, they computed SHA256(x) iterated billions of times, a serial computation that took something like a week of solid computing, using a particular Bitcoin block hash as the initial value. There's no way miners could influence this, as they could not do the serial computation fast enough to know what actual beacon value they were creating.
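A sketch of that delay construction (the real thing iterated billions of times; the count and block hash here are stand-ins):

```python
import hashlib

def hash_chain(seed: bytes, iterations: int) -> bytes:
    """Iterated SHA-256. Each step depends on the previous output,
    so the computation is inherently serial and can't be parallelized."""
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Stand-in for the chosen Bitcoin block hash (hypothetical value).
block_hash = bytes.fromhex("00" * 32)
beacon = hash_chain(block_hash, 100_000)
```

Anyone can re-run the chain afterwards to verify the output; a would-be manipulator would need to run it to completion for each candidate block before the block is even mined, which the serial dependency rules out.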


> ... there was no way that they could do the serial computation fast enough to know what actual beacon value they were creating.

But that also means they can't validate the calculation until long after it's used. For a video game I like this strategy (accept then verify) as you don't have to catch cheaters in the moment. For a system of currency, not so much.


Zcash was using it for a purpose where waiting a week or two to validate the random beacon was fine - it was a one-time calculation required by the mathematics of their trusted setup.

But yes, you are absolutely correct that for many applications that isn't feasible.

Your idea re: online gaming is a good one!


It's interesting that this creates a chain of timestamped random values, but I fail to see how this helps in most of the applications listed in their applications poster.[1] (warning - PDF)

There needs to be some other time-stamped chain that allows submitting hashes of the results of applying the random data. Otherwise you could retroactively tweak your results, in the manner of p-hacking.

[1] https://csrc.nist.gov/CSRC/media/Presentations/usages-of-pub...


Let's say you are organizing a sports event. In a knockout tournament it is really advantageous if you could only face weaker teams, until the final game where you face a team which had to fight all the other strong teams instead.

Two months prior to the event you number each initial spot in the bracket, and you assign a number to each team - sort them alphabetically or something, it doesn't really matter. You tell everyone that the assignment will be done by shuffling with a well-known PRNG, and that you'll take the beacon value from one month before the event as the seed.

The T minus 1 month point is reached, you download the beacon value, feed it into the PRNG, and shuffle the teams to get the final bracket. The algorithm and all inputs except the randomness were fixed in advance, so you cannot manipulate that. You have committed to the seed without knowing it, so you could not have possibly manipulated the seed either.

Everyone can verify this, so everyone will agree that the drawing was done fairly.
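A sketch of that verifiable drawing (team names and the beacon value are made up; `random.Random` isn't cryptographic, but determinism is all the verification needs):

```python
import random

def bracket_from_beacon(teams: list[str], beacon_hex: str) -> list[str]:
    """Deterministic bracket order: fixed pre-announced sort,
    then a shuffle seeded only by the published beacon value."""
    ordered = sorted(teams)
    rng = random.Random(beacon_hex)
    rng.shuffle(ordered)
    return ordered

teams = ["Ajax", "Bayern", "Chelsea", "Dynamo"]
bracket = bracket_from_beacon(teams, "a1b2c3d4")  # hypothetical beacon output
```

Anyone who downloads the same beacon value gets the same bracket, so a rigged draw is immediately detectable.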


I think you are meant to commit to using the value of the beacon at a future point in time, e.g. when you're preregistering your trial methodology.


Couldn't you also just use the (salted) beacon value itself as the key for a HMAC of the ordered list of the people you assigned?


Does anyone have some examples of use cases for timestamped random entropy values?


The NIST site has a few, but basically it's useful any time you need to randomly select things and prove your selection is really random. The idea is you commit to using a specific future beacon value in a particular way to do your random selection, so you can't game either your random value or the process to get the selection results you want.

One example the paper gives is auditing an election by looking at a random sample of polling sites. Before the election, the government commits to using a specified algorithm to select polling sites to audit, with the random input coming from the first beacon value on the day after the election. This verifies prior to the election that the auditing will be random without revealing what polling sites will be audited (in fact there is no way for anyone to know that prior to the beacon value being published). The government can also use multiple independent beacons. Assuming not all the parties are colluding, this does a pretty good job demonstrating the auditing is random.
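A sketch of that audit selection, including the multiple-beacon variant (site names and beacon outputs are made up; combining by hashing the concatenated outputs is one simple approach):

```python
import hashlib
import random

def audit_sample(sites: list[str], beacon_values: list[bytes], k: int) -> list[str]:
    """Pick k sites to audit, seeded by several independent beacon
    outputs combined with a hash, so no single operator picks the seed."""
    seed = hashlib.sha256(b"".join(beacon_values)).hexdigest()
    rng = random.Random(seed)
    return sorted(rng.sample(sorted(sites), k))

sites = [f"site-{i:03d}" for i in range(100)]
chosen = audit_sample(sites, [b"beacon-A-output", b"beacon-B-output"], 5)
```

The algorithm and site list are published before the election; only the beacon values arrive afterwards, so neither the government nor any single beacon operator can steer which sites get audited.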


There is a decentralized version of this, too:

1. Each party shares an encrypted string whose plaintext starts with, "I'm being honest about my key."

2. The parties share their keys and decrypt everyone else's strings.

3. The strings are concatenated and hashed to produce a number that no party can bias.

This is a formalization of rock-paper-scissors, a protocol for generating fair random numbers that we may have used as kids.
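A sketch of the protocol, substituting hash commitments for the encrypted strings (they play the same binding role, with the fixed prefix serving as the honesty check):

```python
import hashlib
import secrets

PREFIX = b"I'm being honest about my key."

def make_party():
    """Each party picks a random contribution and commits to PREFIX + contribution."""
    contribution = secrets.token_bytes(32)
    commitment = hashlib.sha256(PREFIX + contribution).digest()
    return contribution, commitment

# Round 1: every party publishes only its commitment.
parties = [make_party() for _ in range(3)]

# Round 2: every party reveals its contribution; everyone checks
# the reveals against the commitments from round 1.
for contribution, commitment in parties:
    assert hashlib.sha256(PREFIX + contribution).digest() == commitment

# The shared random value: a hash over all contributions, which no single
# party can bias as long as at least one contribution was honestly random.
shared = hashlib.sha256(b"".join(c for c, _ in parties)).hexdigest()
```

The round-1 commitments lock everyone in before anyone sees another party's contribution, which is exactly the rock-paper-scissors "throw simultaneously" property.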


That's true, although like with rock paper scissors, ensuring simultaneous sharing of the keys is hard. So whoever reveals their key "last" knows the outcome before the other participants, which can be bad depending on the application. Adding a random beacon value from a 3rd party to the game (say as an input to the hash in step 3) means all participants can learn the outcome at the same time and that the reveal time is known in advance. Of course now it's the 3rd party that knows in advance, but for many applications you can trust them to be disinterested in the result and you can use multiple independent 3rd parties for better security.

An interesting property here is that with the old NIST beacon version, the beacon operator could actually change the result assuming the other parties in your construct reveal their keys before the beacon value is published (because NIST or whoever could just pick a beacon value that gives a particular result). However the new version includes a hash commitment in the current beacon for the next beacon, so participants could wait for that to be published, then share their keys now that the beacon operator can't change the next beacon value.


I know nothing of encryption- is there significance to each party starting their string with an identical prefix?


That part is to stop them from choosing the decryption key after sharing the encrypted strings with the intended result of guiding the outcome of the hash. If the content of the strings was wholly random a participant would be able to claim that the gibberish resulting from using a key chosen later was what they had meant to encrypt in step one.


Ah, that makes sense. Thanks.



Finally we can set up verifiably fair betting!

$10 that the NIST beacon at 1727710980000 will have more than 15 A's in it!


On the subject of verifiably fair betting: in the early 1900s, the mob's "numbers game", which as I understand it was a sort of lottery, used low-order digits from the closing prices of well-known stocks to pick its numbers.

People could trust that the numbers were fair because they were generated and published outside the control of the person running the game: it would be very hard to fix the cent values generated by stock activity, and the same numbers were published in all the newspapers.


> WARNING: Do NOT use Beacon generated values as cryptographic secret keys!

As a thought experiment what would happen if you did this?


Someone who knew or guessed that you were doing this could find your key very quickly by trying every beacon value in the time range your key was generated in.


Many exploited systems have effectively done this. A random number generator is seeded with something like "PID + timestamp + milliseconds since system startup." But all these numbers cluster in a small range, so it's practical to test all of them and figure out the seed.
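A sketch of that attack with a made-up timestamp: the "victim" seeds a non-cryptographic PRNG with a guessable value, and the attacker simply replays every seed in the plausible window.

```python
import random

def keygen(seed: int) -> int:
    """Derive a 128-bit 'key' from a PRNG seed. NOT cryptographically
    secure: the key's entropy is only that of the seed."""
    rng = random.Random(seed)
    return rng.getrandbits(128)

victim_seed = 1_700_000_123            # e.g. a Unix timestamp (hypothetical)
victim_key = keygen(victim_seed)

# The attacker knows roughly when the key was generated, so the search
# space is a few hundred seeds rather than 2^128 keys.
recovered = next(s for s in range(1_700_000_000, 1_700_001_000)
                 if keygen(s) == victim_key)
assert recovered == victim_seed
```

The same logic applies to the beacon: every published value is public, so a key derived from one is only as secret as the guess "which beacon value did they use?"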


Similar to what happens if you use the top row of your keyboard as a password.


That is why I continue to use hunter2 as my password.


You use ******* as a password? That’s all on a single line of the keyboard.


FYI I have a project that has been running for a few years now that injects the NIST Randomness Beacon values into the Bitcoin chain, along with a few other random beacons: https://github.com/opentimestamps/nist-inject

Bitcoin itself is a good randomness beacon, depending on your requirements for timing. But injecting the NIST beacon into Bitcoin improves the security of various statements you can make about Bitcoin blocks, such as knowing that a particular block was created after a point in time. This is relevant to applications using Bitcoin for timestamping, as it prevents undetected back-dating of Bitcoin blocks.

It's fair to say this is a real world use-case for the NIST Randomness Beacon. Though AFAIK no-one has ever had to actually use it in anger to validate a contested timestamp.


Once upon a time I wrote a Python library that would interact with the NIST randomness beacon: https://github.com/urda/nistbeacon

Might have to go unearth that project and check out this new version.


I fucking love beacons. It's really cool that NIST actually built one, and the explanations in this thread about the improvements to 2.0 are very illuminating.

My favourite things to think about with beacons are (i) can you make a practical beacon where you don't have to trust an authority (who might be calculating outcomes in advance), and (ii) how would you make a gigabit beacon? (This one is << 1 kbit.)

This beacon still requires that you trust NIST. This issue is mitigated to some extent by the option to mix outputs from multiple beacons (cool!). But those authorities could still be colluding, and there are existence proofs of seemingly cleaner things -- taking the least significant bit of the NASDAQ, or (my favourite) looking at brightness fluctuations of distant stars (anyone can buy a telescope and a photodiode and get a bit stream). These seem (to me) more decoupled from nefarious meddling.

The starlight thing seems to me trustworthy, tamper-resistant, and easily decentralized, but it is going to be really slow. Why are all the decent beacons we know about seemingly limited to << kHz? Anyone have ideas for a gigabit beacon?



