
I was actually looking into Chebyshev polynomials and minimax optimization to make a fast vectorized implementation of the logarithm function a few days ago. While searching the internet for information I ran into this: https://github.com/pkhuong/polynomial-approximation-catalogu... (also see the link to his blog at the bottom)

He has nice sets of coefficients derived by taking into account the discrete values of floats/doubles, and the performance gained by constraining some coefficients to 0, 1 or 2.

The only thing he did not account for is the precision loss from the multiplications and additions when actually evaluating the polynomial, which unfortunately results in a large relative error around log(1). Eigen's vectorized implementation does a trick where it reduces the argument to the range [sqrt(0.5), sqrt(2)], and does not exhibit this large relative error. The unvectorized MSVC implementation does not exhibit it either.
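Roughly, the trick looks like this (a sketch of the usual frexp-based reduction, not Eigen's actual code; log_range_reduced and the literal constants are just illustrative):

    #include <math.h>

    /* Split x = m * 2^e and shift m into [sqrt(0.5), sqrt(2)) so that
       near x = 1 both the e*ln(2) term and f = m - 1 are tiny, which
       avoids the cancellation that blows up the relative error at log(1). */
    double log_range_reduced(double x) {
        int e;
        double m = frexp(x, &e);              /* x = m * 2^e, m in [0.5, 1) */
        if (m < 0.70710678118654752) {        /* m < sqrt(0.5): rescale */
            m *= 2.0;
            e -= 1;
        }
        double f = m - 1.0;                   /* small near x = 1 */
        /* A real implementation evaluates a minimax polynomial in f here;
           log1p just stands in for it in this sketch. */
        return e * 0.69314718055994531 + log1p(f);
    }

Without the shift, an input just above 1 would be split as m ~ 0.5, e = 1, and the polynomial term would have to cancel an entire ln(2), which is exactly where the relative error explodes.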

Does anyone know how Microsoft and Intel and the like derive the coefficients for their approximations of the elementary functions in math.h?




The reference collection of polynomial approximations is Approximations for Digital Computers by Cecil Hastings Jr. The method for deriving the polynomials is described there, as well as coefficients for many common functions. It isn't as complicated as it seems. Intel has probably automated the procedure.
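For a feel of how short such a procedure can be: the simplest route is Chebyshev interpolation at Chebyshev nodes, which is near-minimax but not Hastings' actual exchange method. The interval and degree below are picked purely for illustration (log on [sqrt(0.5), sqrt(2)]):

    #include <math.h>
    #include <stdio.h>

    #define N  8                      /* number of coefficients (degree N-1) */
    #define PI 3.14159265358979323846

    /* Chebyshev coefficients of log(x) on [a, b], from samples at the
       Chebyshev nodes.  Near-minimax, not a Remez/Hastings exchange. */
    int main(void) {
        const double a = 0.70710678118654752;   /* sqrt(0.5) */
        const double b = 1.41421356237309505;   /* sqrt(2)   */
        double c[N];
        for (int j = 0; j < N; ++j) {
            double sum = 0.0;
            for (int k = 0; k < N; ++k) {
                double t = cos(PI * (k + 0.5) / N);            /* node in [-1, 1] */
                double x = 0.5 * ((b - a) * t + (b + a));      /* map to [a, b]   */
                sum += log(x) * cos(PI * j * (k + 0.5) / N);   /* f(x_k) * T_j(t_k) */
            }
            c[j] = 2.0 * sum / N;   /* convention: use c[0]/2 when evaluating */
        }
        for (int j = 0; j < N; ++j)
            printf("c[%d] = % .17g\n", j, c[j]);
        return 0;
    }

A true minimax fit then refines these coefficients (Remez exchange, or the iterative adjustment Hastings describes) to equalize the error peaks.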



