Using node IDs and timestamps is just an additional safety factor. Statistically, it's not necessary if you have a good generator. Even without those things, our target collision probability is lower than the expected uncorrectable bit error rate of a HDD.
Good PRNGs have equivalent entropy at each bit. With a good PRNG (even a non-CS PRNG) you shouldn't need to mix entropy to do scaling. You should still do rejection sampling[1] if you care about bias. It looks like a good scaling method might be added to the ECMA spec as part of the standard library thanks to some awesome people at Google.[2]
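The rejection-sampling idea referenced in [1] can be sketched roughly like this (illustrative only; the helper names and the use of Math.random as a stand-in source are my assumptions, not the linked gist's actual code):

```javascript
// Sketch: scaling uniform random bits down to [0, max) without modulo bias,
// via rejection sampling. Assumes a source of uniform 32-bit unsigned
// integers; Math.random is used here purely as an illustrative stand-in.

// Hypothetical helper: draw a uniform 32-bit unsigned integer.
function randomUint32() {
  return Math.floor(Math.random() * 0x100000000) >>> 0;
}

// Naive `randomUint32() % max` over-represents small values whenever
// 2^32 is not a multiple of max. Rejecting draws at or above the largest
// multiple of max below 2^32 removes that bias.
function randomBelow(max) {
  const limit = 0x100000000 - (0x100000000 % max); // largest multiple of max <= 2^32
  let x;
  do {
    x = randomUint32();
  } while (x >= limit);
  return x % max;
}
```

The rejection loop rarely iterates more than once in practice, since the rejected region is at most `max - 1` values out of 2^32.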
"Using node IDs and timestamps is just an additional safety factor. Statistically it's not necessary if you have a good generator."
Sorry, that's just not sensible. You always want to add node IDs and timestamps (provided you hash the final output so as not to leak details about your system) in case your generator fails. Why would you not want another layer of safety? It also helps protect against the case where an attacker might gain something by being able to predict the next ID in the sequence.
If the attacker might gain something by being able to predict the next ID in the sequence then you should be using a CSPRNG. That's not a problem here. There's nothing for them to gain.
It absolutely is sensible. Adding node IDs and timestamps leaks information. If you add a hash function now you have two problems -- the hash of a random value is actually a _new_ random value with entirely different characteristics. You're falling into another trap. Which is why you might not want another layer of safety -- you're introducing another layer of complexity and another place to fuck up. You had good intentions, but in the scenario you described you've just introduced an additional point of failure with limited upside. Why wouldn't you do the math and implement the simpler solution using a generator that won't fail?
As I've said elsewhere, the likelihood of a collision with our identifiers is lower than the uncorrectable bit error rate of a HDD. In other words, it's more likely for a perfect deterministic method to "generate" a collision because a hardware failure corrupted the value while persisting it to disk. Or, more pragmatically, the risk is far below the level that any sensible person should ever worry about.
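The arithmetic behind that kind of claim is the birthday bound. The numbers below are illustrative only (the comment doesn't state its actual ID size or volume); a typical HDD uncorrectable bit error rate spec is around 1 in 10^14 to 10^15 bits read:

```javascript
// Birthday bound: for n random b-bit IDs, P(at least one collision)
// is approximately n^2 / 2^(b+1) when that quantity is small.
function collisionProbability(n, bits) {
  return (n * n) / Math.pow(2, bits + 1);
}

// Illustrative: one billion 128-bit random IDs.
const p = collisionProbability(1e9, 128);
// p is on the order of 1e-21, several orders of magnitude below a
// ~1e-15 uncorrectable bit error rate.
```

The point of the comparison: once the collision probability is dominated by the hardware's own error rate, adding more uniqueness machinery stops buying anything measurable.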
It seems your original function made the same assumption about V8's Math.random. All PRNGs fail at some point, even CSPRNGs. You may as well write your code accordingly, with less optimistic assumptions.
If you're not going to be adding layers of safety, and if you're going to keep insisting that your PRNG "won't fail" then I guess it's only a matter of time before you will have to repeat the same mistake.
As other commenters have pointed out, you should have written your function in such a way that it does not place a critical reliance on any single component.
I generally trust peer reviewed formal mathematical proofs that show something won't "fail" in a particular, relevant, way. If you don't then you probably shouldn't be on a computer. The code I'm relying on makes the same sorts of assumptions that keep your data secure. It is inconsistent to trust it in one place but not in another.
I don't see the need for belt-and-suspenders here and there are legit reasons not to add host/time to an identifier. That's why we have UUID1 and UUID4.
[1] https://gist.github.com/mmalone/d710793137ed0d6b8cb4
[2] https://twitter.com/mjmalone/status/667806963976134656