Hacker News

That basically uses dynamic programming, which is "cheating". (It is a linear-time algorithm instead of an exponential one!) Different compilers will almost always give us only constant-factor improvements. Smarter algorithms will always beat that, but sometimes a smart algorithm with a slow compiler still isn't fast enough.



I don't think the matrix algorithm is linear; in other words, I don't think fib(40) takes twice as long to compute as fib(20). I think it's faster than linear, because it converts the repeated multiplications into exponentiation and then takes advantage of composability, such that fib(40) should require on the order of one more operation than fib(20).

I think that means it's order log 2 n, but perhaps you HN folks can correct my poor math skills.
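A minimal sketch of that idea (mine, not from the linked site), using the standard fast-doubling identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k+1)² + F(k)², which compute fib(n) in O(log n) multiplications:

```python
def fib(n: int) -> int:
    """Return F(n) using O(log n) multiplications via fast doubling."""
    def pair(k: int):
        # Returns (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        a, b = pair(k >> 1)          # halve the index: one extra level per bit of n
        c = a * (2 * b - a)          # F(2m)
        d = a * a + b * b            # F(2m+1)
        if k & 1:
            return (d, c + d)
        return (c, d)
    return pair(n)[0]
```

With this, fib(40) does exactly one more doubling step than fib(20), which is the "one more operation" intuition above.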


The site is down now, but from looking at Wikipedia, you are probably right. (Sorry, I just glanced at the way they were using the matrix before, and assumed they did n matrix multiplies to calculate fib(n).) I think you are also right about the complexity, though people would usually just write log(n), since changing the base of a logarithm only changes a constant factor: log2(n) = log(n)/log(2).


Sorry, I didn't mean log (2n), I meant (log sub 2) n. Or in other words, the computation time is proportional to the number of binary bits in n.

The algorithm is remarkably simple. Given that fib(40) can be extracted from:

           40
    [ 1 1 ]
    [ 1 0 ]
The simplest implementation, as you surmise, is to do 40 matrix multiplications. I don't think that part is dynamic. The dynamic part comes in observing that n^40 = (n^20)^2. This can be followed downward recursively: n^20 = (n^10)^2, n^10 = (n^5)^2, n^5 = (n^2)^2 * n, and n^2 = n * n.

So... The actual matrix multiplication is much slower than a simple bignum multiplication, but there are only six such operations to perform.
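Here's a sketch of that matrix approach (mine, not the site's code), using iterated squaring rather than the top-down recursion above, so it performs a few more matrix multiplies than the six counted in the recursive chain:

```python
def mat_mul(A, B):
    """2x2 matrix product; each call costs a handful of bignum multiplies."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def mat_pow(M, n):
    """M**n by repeated squaring: about log2(n) squarings plus a multiply per set bit."""
    result = ((1, 0), (0, 1))  # 2x2 identity
    while n:
        if n & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        n >>= 1
    return result

def fib(n):
    # fib(n) is the off-diagonal entry of [[1, 1], [1, 0]] ** n.
    return mat_pow(((1, 1), (1, 0)), n)[0][1]
```

So for fib(40) the loop runs once per bit of 40, not 40 times.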




