
This is why IEEE 754 floating point is both a blessing and a curse: it's fast and standardized, but it introduces unavoidable precision loss, especially in iterative computations where rounding errors accumulate in hard-to-predict ways. People act like adding precision bits solves everything, but that just pushes the problem further down the line; you're still dealing with truncation, catastrophic cancellation, and edge cases where numerical stability breaks down.
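Quick Python sketch of the accumulation problem (illustrative only): naively adding 0.1 a million times drifts, while compensated summation via math.fsum keeps the error from stacking.

    import math

    xs = [0.1] * 1_000_000      # 0.1 has no exact binary representation

    naive = 0.0
    for x in xs:
        naive += x              # each += rounds to the nearest double, so error accumulates

    print(naive)                # prints something like 100000.00000133288
    print(math.fsum(xs))        # 100000.0, compensated summation tracks the lost low-order bits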

… and this is why interval arithmetic and arbitrary-precision methods exist: they give guaranteed bounds on the error instead of just hoping FP rounding doesn't mess things up too badly. But obviously they come with their own overhead: interval methods can be overly conservative (the bounds can blow up far beyond the true error), and arbitrary precision is computationally expensive, with cost growing super-linearly in operand size.
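Toy illustration of the "guaranteed bounds" idea, a hand-rolled interval type with outward rounding via math.nextafter (Python 3.9+). Not a real interval library, just a sketch; the last example is also where the "overly conservative" part shows up.

    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            # round the lower bound down and the upper bound up (outward rounding),
            # so the true result is always contained in the returned interval
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def __sub__(self, other):
            return Interval(math.nextafter(self.lo - other.hi, -math.inf),
                            math.nextafter(self.hi - other.lo, math.inf))

    tenth = Interval(math.nextafter(0.1, 0.0), 0.1)   # encloses the real number 0.1
    print(tenth + tenth + tenth)   # guaranteed to contain 0.3, even though 0.1+0.1+0.1 != 0.3 in doubles

    x = Interval(1.0, 2.0)
    print(x - x)   # roughly [-1.0, 1.0] instead of [0, 0]: the classic dependency problem,
                   # which is why interval results can be much wider than the true error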

I wonder if hybrid approaches could be the move: symbolic preprocessing to keep exact forms where possible, then constrained numerical evaluation only where necessary, trading accuracy against cost dynamically. That would keep things efficient while minimizing precision loss in the critical operations, which matters most where precision requirements shift at runtime. It might also be interesting to explore adaptive precision techniques, where a computation starts at low precision and refines iteratively based on error estimates.
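Minimal sketch of that last idea using the stdlib decimal module: start at low precision and keep doubling until two consecutive evaluations agree to the requested tolerance. Toy heuristic only; a real implementation would use rigorous error bounds (interval/ball arithmetic), since consecutive agreement can be fooled by cancellation.

    from decimal import Decimal, getcontext

    def adaptive_eval(f, target_digits=30, start_prec=8, max_prec=1024):
        prev = None
        prec = start_prec
        while prec <= max_prec:
            getcontext().prec = prec
            cur = f()
            # stop once two successive precision levels agree to target_digits
            if prev is not None and abs(cur - prev) < Decimal(10) ** -target_digits:
                return cur
            prev, prec = cur, prec * 2
        raise ArithmeticError("no convergence below max_prec")

    # refine sqrt(2) until successive results agree to 30 digits
    print(adaptive_eval(lambda: Decimal(2).sqrt()))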


