That's not too different from a pretty old observation in the chess world: the presence/absence of an evaluation term is more important than the weighting given to it.
> So I agree with this claim - uniform distributions are fairly robust to errors. But I don't think that's particularly related to randomness - Monte Carlo is only needed to integrate the distribution.
Ah, that's an interesting distinction, thanks. I'll have to think about this some more. But given a situation where exact integration is intractable (like chess or Go), I'm not too sure what the difference really is, because it is those cases (on first thought) where the uniform distribution is useful--if you can see to the end, you don't need to care about bias, right? I mean, "randomness" in the strictest sense is not really necessary; all these programs I speak of used deterministic pseudorandom generators of course. It's really just about ensuring lack of bias given finite sampling. I'm happy to hear your take on it though--you definitely seem to have a lot more knowledge of math/statistics/etc. than I do.
(That does remind me of another fascinating tidbit from the Go world: programmers noticed that using a low-quality PRNG, like libc's LCG rand(), produced significantly weaker players than more evenly-distributed PRNGs, even though it would seem that playing lots of random games of indeterminate length (with the PRNG called at least once per move) would not correlate at all with the PRNG's distribution.)
The adversarial-or-not issue is also good food for thought. I'm not convinced that it explains much in this case, though, since I believe most of these observations were made by playing computer-computer games with each program using very similar algorithms, or with old hand-tuned programs against the newer Monte-Carlo based programs.
> But given a situation where exact integration is intractable (like chess or Go), I'm not too sure what the difference really is, because it is those cases (on first thought) where the uniform distribution is useful--if you can see to the end, you don't need to care about bias, right?
Put it this way - suppose I can cook up a deterministic quadrature rule, e.g. quasi-Monte Carlo or an asymptotic expansion. I assert that quasi-Monte Carlo will work just as well as Monte Carlo, and probably better if its convergence is faster.
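To make that concrete, here's a quick sketch (Python is my choice here, and the test integrand is just an illustration): both estimators average f over points in [0,1], but quasi-Monte Carlo uses a deterministic low-discrepancy sequence (van der Corput) instead of pseudorandom draws, and for smooth integrands its error typically shrinks much faster than the O(n^-1/2) Monte Carlo rate.

```python
import random

def f(x):
    return x * x  # toy integrand: exact integral over [0,1] is 1/3

def mc_estimate(n, rng):
    # plain Monte Carlo: average f at n uniform pseudorandom points
    return sum(f(rng.random()) for _ in range(n)) / n

def van_der_corput(i, base=2):
    # radical-inverse sequence: a standard deterministic low-discrepancy point set
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

def qmc_estimate(n):
    # quasi-Monte Carlo: the same average, over low-discrepancy points
    return sum(f(van_der_corput(i + 1)) for i in range(n)) / n

n, exact = 10_000, 1.0 / 3.0
print(abs(mc_estimate(n, random.Random(42)) - exact))  # error typically ~1e-3
print(abs(qmc_estimate(n) - exact))                    # error typically far smaller
```

The point being: nothing random happens in `qmc_estimate`, yet the uniform coverage is what does the work.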
If I'm right, this is a situation of "yay for uniform distributions". If I'm wrong, it's a "yay randomness" situation. It's nice to know which situation you are in - if I'm wrong, there is no point cooking up better deterministic quadrature rules.
Incidentally, LCGs are known to be useless for Monte Carlo due to significant autocorrelation (with a power-of-two modulus, the low-order bits are especially non-random). So it's quite possible that people using an LCG are incorrectly estimating their evaluation term.
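A tiny demonstration of that defect (using the classic old-libc-style LCG constants; this is a sketch, not any particular libc's implementation): because the modulus is a power of two and the multiplier and increment are odd, the lowest bit of successive outputs simply alternates.

```python
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    # minimal linear congruential generator with classic constants;
    # m is a power of two, so low-order bits have very short periods
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

low_bits = [x & 1 for x in lcg(1, 16)]
print(low_bits)  # strictly alternating 0,1,0,1,... - the lowest bit has period 2
```

Any playout policy that uses those low bits (e.g. `rand() % num_moves`) inherits that structure directly, which may be exactly why the Go programmers saw weaker play.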
Also for me, it's nice to know these things just for theoretical purposes and to enhance my understanding.