This is a very naive approach for a number of reasons:

1) You're only sampling a tiny fraction of the space of possibilities. The example of two dice, with 36 outcomes, makes it seem like you're in a very well-behaved world. But the total number of possible samples is n_possible_values ^ n_dice, so with three dice you have 216 possible outcomes, with 4 dice 1,296, and with 10 dice 60,466,176. The growth is exponential in the number of variables, with predictable results (a quick back-of-the-envelope computation follows this list).

2) The distribution of delivery times is not normal. Experience shows it's much closer to a power law, with a small number of tasks exploding beyond any reasonable expectation.

3) No critical path. A task can't be completed before its critical path is completed. Since you allow subtask durations to vary, the critical path has to be recalculated for each run. Combined with 1) and 2), this means you have no idea whether you've gotten a good representation of the sample space of critical paths (a sketch combining 2) and 3) appears further down).
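
To put numbers on 1): even a generous Monte Carlo run touches a vanishing fraction of the outcome space once you have more than a handful of variables. A quick sketch in Python (the 10,000-sample budget is an arbitrary assumption, just to have something to divide by):

    # How much of the outcome space does a Monte Carlo run touch?
    # 6 possible values per variable, 10,000 samples -- both illustrative.
    n_values = 6
    n_samples = 10_000

    for n_vars in (2, 3, 4, 10, 20):
        space = n_values ** n_vars
        print(f"{n_vars:>2} vars: {space:,} outcomes, "
              f"samples cover at most {n_samples / space:.2e} of them")

With 10 variables you've seen at most about 0.017% of the outcomes (less in practice, since samples repeat); with 20 you're covering less than one part in a hundred billion.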

If you run a simulation that samples only a tiny fraction of your probability space, you have no idea what monsters lurk in the background when the problem space is one prone to explosions.
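
To make 2) and 3) concrete, here's a minimal sketch (Python; the task graph, the Pareto tail parameter, and the run count are all made-up assumptions for illustration, not anything from the original model). Each run draws heavy-tailed task durations and recomputes the project finish time over a small dependency DAG, which amounts to recomputing the critical path length for every run, as 3) requires:

    import random

    # Toy task graph: task -> prerequisites (hypothetical example).
    deps = {
        "design":      [],
        "backend":     ["design"],
        "frontend":    ["design"],
        "integration": ["backend", "frontend"],
        "deploy":      ["integration"],
    }
    order = ["design", "backend", "frontend", "integration", "deploy"]  # topological order

    def sample_duration():
        # Heavy-tailed stand-in for task effort: Pareto with a finite mean
        # but a fat tail (alpha = 1.5 is an arbitrary illustrative choice).
        return random.paretovariate(1.5)

    def one_run():
        # Earliest finish time per task; the longest dependency chain
        # (the critical path) determines the project finish time.
        finish = {}
        for task in order:
            start = max((finish[d] for d in deps[task]), default=0.0)
            finish[task] = start + sample_duration()
        return finish["deploy"]

    totals = sorted(one_run() for _ in range(10_000))
    mean = sum(totals) / len(totals)
    print(f"mean {mean:.1f}  median {totals[5_000]:.1f}  "
          f"p99 {totals[9_900]:.1f}  max {totals[-1]:.1f}")

Run it a few times: the median barely moves, while the p99 and the max jump around wildly, and which branch ends up critical changes from sample to sample. Those unstable tail samples are exactly the region a small run is least likely to have explored.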

In short: beware of tap dancing in minefields when blindfolded.