Nuclear engineer here (specialty in core design/simulation). It's fun to see the Monte Carlo method being used in so many other fields now. Even in nuclear, deterministic methods are still orders of magnitude faster for most 'normal' reactor analyses, on configurations common enough that the important deterministic effects are already well known. But with computers so fast, it's quite common for people, especially in the conceptual design space, to use Monte Carlo methods, since it's a lot easier to believe the answer once you get it.
The code MCNP is still the most common nuclear analysis code, and it's directly descended from these original codes from LANL. [1]
There is also a very powerful research code called OpenMC from ANL that anyone can run on their (powerful) computer. [2]
Hello fellow nuclear engineer, MCNP is also my main workhorse. Very much looking forward to the 6.3 release!
As a cool bit of additional history, check out the FERMIAC. An analog Monte Carlo device for doing neutron transport in two dimensions.
https://en.m.wikipedia.org/wiki/FERMIAC
>But with computers so fast, it's quite common for people, especially in conceptual design space, to use Monte Carlo methods since it's a lot easier to believe the answer once you get it.
Is it really easier to "believe" an answer arrived at on a purely stochastic level?
I'm kind of surprised; if I had to choose, I would be far more confident in answers from deterministic descriptions/equations, despite them being more abstract and potentially harder to "visualize".
More often than not I find supposed surface-level 'comprehensibility' to be quite misleading. Of course, if one doesn't have a clue where to start and has enough processing power, the Monte Carlo method and the like can certainly help jumpstart/brute-force the process.
MC tends to be more physically accurate (in the limit of a large number of particle histories) because it can simulate radiation transport in continuous space, energy, and angle. Deterministic methods can be faster, but the discretization process is somewhat of a dark art because the underlying distributions can be highly nonlinear and rapidly varying. A combination of methods with varying levels of fidelity is typically used in real nuclear engineering applications, and all of them are referenced back to a common suite of experimental benchmarks for validation.
Yes, definitely. With deterministic methods you have to make all sorts of approximations to discretize the spatial details of the fuel assemblies, the energy spectrum of the neutrons, their angular directions, and so on. These approximations are complex and sensitive. With Monte Carlo methods you can treat all of those things without approximation. Under the hood, both deterministic and Monte Carlo nucleonics methods depend on the same measured/interpolated nuclear interaction probability tables (aka nuclear cross sections).
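To make that concrete, here's a toy sketch (nothing like what a production code such as MCNP does internally; the cross section and slab thickness are made-up numbers): track particles through a 1-D purely absorbing slab by sampling free-flight distances from the continuous exponential distribution, then compare the tallied transmission to the analytic exp(-sigma*t). No spatial, energy, or angular discretization is involved.

    import random, math

    sigma_t = 0.5     # hypothetical total macroscopic cross section (1/cm)
    thickness = 4.0   # slab thickness in cm, purely absorbing for simplicity
    n = 1_000_000

    transmitted = 0
    for _ in range(n):
        # sample a free-flight distance from the continuous exponential pdf
        d = -math.log(1.0 - random.random()) / sigma_t
        if d > thickness:
            transmitted += 1

    p = transmitted / n
    stderr = math.sqrt(p * (1.0 - p) / n)
    print(f"MC transmission:        {p:.5f} +/- {stderr:.5f}")
    print(f"analytic exp(-sigma*t): {math.exp(-sigma_t * thickness):.5f}")

The statistical error bar only shrinks like 1/sqrt(N), which is the price you pay for skipping the discretization.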
You can write down an equation for evaluating a ray-traced picture with perfect mathematical precision; you just cannot evaluate the integral for any non-trivial scene. MC is the integration technique for it.
Edit: so you know exactly what you're getting, as long as you keep it simple - the gotchas start when you try to be clever and use fewer samples (biased MC)
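Here's a minimal sketch of the plain (unbiased) version of that idea, with a simple 1-D integral standing in for the rendering equation; the integrand is made up purely for illustration:

    import random, math

    # Estimate I = integral of sin(x) from 0 to pi (exactly 2) by averaging
    # f(x) over uniform samples and scaling by the interval length.
    n = 100_000
    samples = [math.pi * math.sin(random.uniform(0.0, math.pi)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    print(f"estimate = {mean:.4f} +/- {math.sqrt(var / n):.4f}   (exact: 2)")

As long as the samples are unbiased, the estimate converges to the true integral and the error estimate comes for free from the sample variance.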
OP's link was a rabbit hole (in a very good way); it sent me down a paper on the LCG random number generator used for MCNP modelling, which somehow led to that.
I didn't realize how integral Monte Carlo sims were to our early advances in nuclear technology. It makes sense to me though - it seems like the Monte Carlo method lets you punch above your weight class in terms of measuring and predicting phenomena that are too complex, or too expensive to deterministically model.
My intuition tells me that its effectiveness would fall off as the complexity of the input/output relationship scales. Is this true? Or can sufficient sample density overcome arbitrary levels of that type of complexity?
> the Monte Carlo method lets you punch above your weight class in terms of measuring and predicting phenomena that are too complex, or too expensive to deterministically model.
Yes, this is exactly why I like it. At AWS, we've used Monte Carlo simulations quite extensively to model the behavior of complex distributed systems and distributed databases. These are typically systems with complex interactions between many components, each linked by a network with complex behavior of its own. Latency and response time distributions are typically multi-modal, and hard to deal with analytically.
One direction I'm particularly excited by in this niche is converging simulation tools and model checking tools. For example, we could have a tool like P use the same specification for exhaustive model checking, fuzzing invariants, and doing MC (and MCMC) to produce statistical models of things like latency and availability.
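To give a flavor of what such a simulation can look like (this is a made-up toy model, not anything AWS-specific: the bimodal latency numbers and the quorum-of-3 read are hypothetical), you draw per-component latencies from whatever distributions you believe in and read the percentiles off the simulated end-to-end results:

    import random

    def replica_latency_ms():
        # hypothetical bimodal model: usually fast, occasionally a slow mode
        # (think GC pause, cold cache, retry); this is what makes the
        # end-to-end distribution multi-modal and awkward to treat analytically
        if random.random() < 0.05:
            return max(0.0, random.gauss(80.0, 15.0))
        return max(0.0, random.gauss(5.0, 1.0))

    def request_latency_ms():
        # hypothetical quorum read: wait for the fastest 2 of 3 replicas
        replies = sorted(replica_latency_ms() for _ in range(3))
        return replies[1]

    n = 200_000
    lat = sorted(request_latency_ms() for _ in range(n))
    for q in (0.50, 0.99, 0.999):
        print(f"p{q * 100:g}: {lat[int(q * (n - 1))]:.1f} ms")

Swapping in measured latency traces for the toy distributions is usually the first step toward making something like this trustworthy.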
It is hard to generalise about the suitability of Monte Carlo methods. In practical applications they are almost always used in hybrid systems, combined with analytical methods and problem-specific shortcuts. How to apply Monte Carlo methods to a given problem tends to be an open-ended question.
I think it's an ad-hoc term. But basically, MC simulations are needed because circuit elements such as transistors and resistors may be mismatched (for example, Vth in MOS transistors). This can create input offsets in op-amps, or timing differences in logic. You can run exactly the same simulations as usual (DC operating point, AC transfer function, time domain with many thousands of points), just over and over with new random parameters drawn according to the distributions estimated in device characterization (I think mostly Gaussian).
Transistor models like BSIM are crazy complicated these days, and there's no way to find an analytical solution for all that.
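As a toy illustration of the idea (a made-up first-order hand model, not a SPICE/BSIM simulation): take the input-referred offset of a MOS differential pair as just the Vth difference of its two devices, with a hypothetical mismatch sigma, and look at the resulting offset spread:

    import random, statistics

    # toy first-order model: input-referred offset of a MOS differential pair
    # taken as the Vth difference between its two devices; sigma_vth is a
    # made-up Pelgrom-style mismatch number for illustration
    sigma_vth = 2e-3   # 2 mV per device (hypothetical)

    n = 100_000
    offsets = [random.gauss(0.0, sigma_vth) - random.gauss(0.0, sigma_vth)
               for _ in range(n)]

    sigma_off = statistics.stdev(offsets)
    print(f"offset sigma  ~ {sigma_off * 1e3:.2f} mV")
    print(f"3-sigma range ~ +/-{3 * sigma_off * 1e3:.2f} mV")

In real flows the "model" is the full transistor-level netlist, but the loop structure is the same: re-run the identical simulation with fresh random device parameters each time.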
For those interested, the first Monte Carlo program, which ran on the ENIAC, has been found and documented (840 instructions long). It was also the first stored program ever run.
Thank you for these links! I do a lot of scientific computation using MC and MCMC. It’s great to read that history and see the care they invested in their flow diagrams, the versions that got more capable, etc.
Yes, considering this program dates from April 1947, it can almost be considered computer archeology. I find it fascinating that they had already come up with labels and functions.
The ENIAC is a fascinating machine as well, from its initial modular, parallel, dataflow layout to an actual CPU running code a couple years later.
I wish the Wikipedia article would dive more into the birth of its processor. Interestingly, the French article is the most exhaustive regarding this topic.
Pretty cool author bio from the end of the article:
N. Metropolis received his B.S. (1937) and his Ph.D. (1941) in physics at the University of Chicago. He arrived in Los Alamos, April 1943, as a member of the original staff of fifty scientists. After the war he returned to the faculty of the University of Chicago as Assistant Professor. He came back to Los Alamos in 1948 to form the group that designed and built MANIAC I and II. (He chose the name MANIAC in the hope of stopping the rash of such acronyms for machine names, but may have, instead, only further stimulated such use.) From 1957 to 1965 he was Professor of Physics at the University of Chicago and was the founding Director of its Institute for Computer Research. In 1965 he returned to Los Alamos where he was made a Laboratory Senior Fellow in 1980. Although he retired recently, he remains active as a Laboratory Senior Fellow Emeritus.
I became a big fan of the Monte Carlo method when I watched an economist torture a problem until he found a way to turn it into the heat equation, a differential equation he knew how to solve. He made so many bogus assumptions that the answer was pretty much worthless. But, hey, he could claim he had a closed form solution!
No. It was another bit of social science. I have other problems with Black-Scholes, but they're not as strong. One friend told me that for all of the problems, it offered a relatively neutral way to arb between different strike prices of the same security.
The one thing I love about Monte Carlo is the way you can use it very simply to give yourself some peace of mind that your probability formula for a finite distribution, derived with sweat and blood using very complicated combinatorics (the kind found in here: https://www.csie.ntu.edu.tw/~r97002/temp/Concrete%20Mathemat...), actually works.
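For example (a hypothetical stand-in for the kind of formula I mean), checking a hypergeometric closed form against brute-force sampling takes a few lines:

    import random
    from math import comb

    # closed form: probability of exactly 2 aces in a 5-card hand
    exact = comb(4, 2) * comb(48, 3) / comb(52, 5)

    # the same quantity by brute-force sampling; cards 0..3 are the aces
    deck = list(range(52))
    n = 500_000
    hits = sum(1 for _ in range(n)
               if sum(c < 4 for c in random.sample(deck, 5)) == 2)

    print(f"formula:    {exact:.5f}")
    print(f"simulation: {hits / n:.5f}")

If the two numbers disagree by more than a few standard errors, it's usually the combinatorics that's wrong, not the dice.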
I do remember using MC to model the static nonlinearity I could expect from high-speed digital-to-analog converters for given transistor sizes in their current sources, back in the day...
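Something in that spirit (a made-up toy model rather than a real transistor-level simulation: unit-current mismatch in a thermometer-coded DAC, with the mismatch sigma standing in for what a given sizing would buy you):

    import random

    # toy thermometer-coded current-steering DAC: each unit current source gets
    # a random mismatch, and INL is the worst deviation of the transfer curve
    # from the endpoint-fit straight line; sigma is a made-up mismatch number
    bits = 6
    units = 2 ** bits - 1
    sigma_mismatch = 0.02   # 2% relative unit-current mismatch (hypothetical)

    worst_inl_lsb = 0.0
    for _ in range(2000):   # 2000 simulated "chips"
        currents = [1.0 + random.gauss(0.0, sigma_mismatch) for _ in range(units)]
        outputs = [0.0]
        for i_unit in currents:
            outputs.append(outputs[-1] + i_unit)
        lsb = outputs[-1] / units   # endpoint-fit ideal step size
        inl = max(abs(outputs[code] - code * lsb) for code in range(units + 1))
        worst_inl_lsb = max(worst_inl_lsb, inl / lsb)

    print(f"worst-case INL over sampled chips: {worst_inl_lsb:.2f} LSB")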
It's also super useful for exploring the spectrum of potential outcomes in financial projections / retirement scenarios. People are sometimes tempted to think in the simpler terms of average rates of return and not fully consider issues like sequence risk. MC can help build better intuitions around the real chances of success given a broader range of varying market conditions.
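As a sketch of the flavor (all numbers made up, and with i.i.d. Gaussian real returns instead of the historical or correlated sequences a serious model would use to capture sequence risk properly):

    import random

    # toy sketch: fixed inflation-adjusted withdrawal against i.i.d. Gaussian
    # real returns; all parameter values are hypothetical
    def retirement_survives(start=1_000_000, withdraw=40_000, years=30):
        balance = start
        for _ in range(years):
            balance -= withdraw
            if balance <= 0:
                return False
            balance *= 1.0 + random.gauss(0.05, 0.12)   # hypothetical real return
        return True

    n = 20_000
    successes = sum(retirement_survives() for _ in range(n))
    print(f"estimated success rate: {successes / n:.1%}")

A distribution of outcomes like this tells you much more than the single trajectory you'd get from assuming the average return every year.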
It should be noted that modern graphics rendering techniques are a little more accessible and intuitive, while still having the same basic challenges and solutions as the nuclear simulations mentioned in this thread/in the Wikipedia article. Things get even more interesting because quantities are spectral in nature, can be polarized, etc. Wenzel Jakob is doing important work in this area out of EPFL[1].
It says "In the late 1940s, Stanislaw Ulam invented the modern version of the Markov Chain Monte Carlo method", but as far as I know, this is incorrect. He invented a Monte Carlo method, but not a Markov chain Monte Carlo method. Markov chain Monte Carlo is generally attributed to Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller. See https://en.wikipedia.org/wiki/Metropolis-Hastings_algorithm
The article fails even to distinguish simple Monte Carlo based on independently sampled points from Markov chain Monte Carlo. It seems rather confused in other respects too, such as in its discussion of "mean field" methods.
It's a good practice not to link to wikipedia.org when a more in-depth or specific third-party source is available, or if the topic is a well known one (too generic). But that leaves a lot of Wikipedia pages on more obscure topics, and those make fine HN submissions, as long as the topic is of intellectual interest and not particularly correlated with other things. And as long as we don't overdo it.
A while ago, I submitted a raw Wiki link to "Ligne Claire", a drawing style adopted by Hergé of "Adventures of Tintin" fame. That made it to #1 and stayed there for a few hours. It's not very unusual.
I'm sure that's true, but also there's a lot of very interesting HN-relevant stuff on Wikipedia, and it's perfectly reasonable to share it when you find something pithy.
I've certainly done it and contrariwise have often enjoyed Wikipedia articles (including this one) from other users.
Apropos of which I wish I'd had Wikipedia when I was a kid - I recall being utterly baffled by Britannica's "explanation" of the term "parsec" and only much later reading a definition that put it in the context of how stellar distances were actually resolved.
Edit: Looking at swibbler's submission history, they're clearly not a karma farmer btw.
[1] https://mcnp.lanl.gov/reference_collection.html
[2] https://docs.openmc.org/en/stable/