
I tried 4 times, but on my machine (Firefox on Linux) I get the exact same result every time, down to the last pixel.

In theory you would expect a physics engine to always behave exactly the same every time (as long as there are no random forces in the simulation, obviously). In practice, I can think of some ways that would influence the result. For example, the x87 FPU on x86 can use extended (80-bit) precision for intermediate floating-point calculations internally, but values get rounded to 64-bit doubles whenever they are stored back to memory. Depending on the timing of thread context switches, you can imagine subtle roundoff differences depending on exactly when a value gets demoted from extended to double precision. SSE math can show similar effects, for instance when the flush-to-zero and denormals-are-zero flags are enabled, which deviates from strict IEEE 754 behaviour.
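
To make the extended-precision point concrete, here's a small C sketch (my own, not part of the comment above) where the result depends on whether the intermediate is kept at 80-bit x87 precision or rounded to a 64-bit double. With GCC on 32-bit x86 you can build it once with -mfpmath=387 and once with -mfpmath=sse -msse2 and compare the output; whether the x87 build really keeps the intermediate in a register is up to the compiler, so treat it as an illustration rather than a guarantee.

    #include <stdio.h>

    /* volatile keeps the compiler from folding the arithmetic away at
     * compile time, so it is actually done by the selected FP unit. */
    volatile double a = 1e308;
    volatile double b = 10.0;

    int main(void)
    {
        /* With 80-bit x87 intermediates, a * b (= 1e309) does not
         * overflow, and dividing by b brings the value back into
         * double range, printing 1e+308.
         * With 64-bit SSE arithmetic, a * b overflows to +inf first,
         * and inf / 10 stays inf. */
        double r = a * b / b;
        printf("%g\n", r);
        return 0;
    }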

Usually the introduced error is vanishingly small and most algorithms are robust enough to cancel it out. Things like non-linear regression or physics simulations are notable exceptions, because the calculations they do are iterative and progressive, meaning any introduced error can propagate and get amplified along the way. I've seen deterministic non-linear fit algorithms go in completely different directions on unstable problems on different machines, just because of CPU differences (64-bit vs 32-bit, x86 vs. SPARC, etc.).
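
As a toy illustration of that amplification (my own example, not from the comment), here's the chaotic logistic map in C: two starting values a single ulp apart end up completely unrelated after a few dozen iterations, which is essentially what happens to an unstable fit or simulation when the hardware rounds slightly differently.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 0.3;
        double y = nextafter(x, 1.0);   /* x plus a single ulp */

        /* Iterate the chaotic logistic map x -> 4x(1-x) and watch the
         * initially tiny difference grow until the two trajectories
         * have nothing to do with each other. */
        for (int i = 1; i <= 60; i++) {
            x = 4.0 * x * (1.0 - x);
            y = 4.0 * y * (1.0 - y);
            if (i % 10 == 0)
                printf("iter %2d  |x - y| = %.3e\n", i, fabs(x - y));
        }
        return 0;
    }

(Compile with -lm for nextafter and fabs.)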




Great explanation, thank you. Always fun to know about non-determinism at the hardware level.


>> Always fun to know about non-determinism at the hardware level.

Strictly speaking, the non-determinism isn't actually in the hardware, because the OS schedules thread context switches ;-)

Assuming the program is completely single-threaded, or the OS schedules its thread context switches deterministically, the result should always be perfectly identical.
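
One way scheduling creeps into the numbers (my own sketch, not the parent's): floating-point addition isn't associative, so if thread interleaving changes the order in which partial results are combined, the totals can differ. A trivial single-threaded C example of the order effect:

    #include <stdio.h>

    int main(void)
    {
        /* volatile stops the compiler from folding the sums itself */
        volatile double big = 1e16, small = 1.0;

        /* Order A: each small value is absorbed into the big one and
         * rounded away (the ulp of 1e16 is 2). */
        double a = (big + small) + small;

        /* Order B: the small values combine first and survive. */
        double b = big + (small + small);

        printf("order A: %.1f\norder B: %.1f\n", a, b);
        return 0;
    }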




