
The Fibonacci implementation here:

http://github.com/raganwald/homoiconic/blob/master/2008-12-1...

does a pretty quick job using plain-old-old-Ruby (a POOR implementation):

    time ruby /Users/raganwald/Desktop/matrix_fib.rb   102334155

    real    0m0.012s
    user    0m0.007s
    sys     0m0.004s
That doesn't mean we shouldn't welcome implementation improvements, but in my lifetime the quest for a "sufficiently smart compiler" that will do away with thinking programmers has enjoyed only modest success.

Addendum: This is not a criticism of Charles' amazing work dragging Ruby into the 1980s and designing Ruby-like languages for the JVM. If asked to choose between algorithm improvements and implementation improvements, I answer "Yes!"
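
For the curious, here is a minimal sketch of the matrix-power idea in plain Ruby, using the stdlib Matrix class. This is an illustration of the approach, not the contents of the linked file (its URL is truncated above):

    require 'matrix'

    # fib(n) is the [0, 1] entry of [[1, 1], [1, 0]] raised to the nth power.
    # Matrix#** with an Integer exponent squares its way up, so this takes
    # only O(log n) matrix multiplications.
    def fib(n)
      (Matrix[[1, 1], [1, 0]] ** n)[0, 1]
    end

    puts fib(40)  # => 102334155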




Neat hack, but needless and, besides that, irrelevant: the author didn't really care about the Fibonacci sequence. He cared about function calls and additions.

See Performance and Evaluation of Lisp Systems, http://www.dreamsongs.com/Files/Timrep.pdf, for more discussion of the nature of benchmarks.


Thank you, I am aware that the author was deliberately picking a "most pessimum" algorithm as a benchmark. I am not commenting on the nature of benchmarks, but rather on the relative importance of algorithms and implementations.

You may find it irrelevant to the author's point, others may find it an interesting byway to follow in a larger conversation. Or not.


OK, but the implementation layer is just another layer of algorithms, except that you have somewhat less control over it. I notice your code uses inject, which is coded in C rather than Ruby -- as are many parts of POOR where C offers better performance.

The point of Duby/Juby is to give us _more_ control over these algorithms by dragging them out of the base implementation language and into Ruby. Why should inject be written in C or Java?
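
To make the question concrete, here is roughly what inject looks like when written in Ruby itself. This is an illustrative sketch (renamed my_inject to avoid clobbering the real method), not MRI's actual C implementation:

    module Enumerable
      # A pure-Ruby inject: fold each element into an accumulator.
      # With no explicit seed, the first element becomes the seed.
      def my_inject(memo = (no_seed = true; nil))
        each do |element|
          if no_seed
            memo, no_seed = element, false
          else
            memo = yield(memo, element)
          end
        end
        memo
      end
    end

    [1, 2, 3, 4].my_inject { |sum, x| sum + x }       # => 10
    [1, 2, 3, 4].my_inject(10) { |sum, x| sum + x }   # => 20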


> Why should inject be written in C or Java?

I agree with your point and bless you with a mighty upmod. One of my peeves about inject being written in C is that there is very little pressure to make Ruby fast enough that inject could be written in Ruby. So who cares? Well, if you want to make your own language feature that is like inject but not inject, your feature will be 100x slower.

So making Ruby fast enough that Ruby can be implemented in Ruby is a huge win for metaprogrammers :-)
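
For example, a homegrown inject-like feature might look like this. Everything here -- the name inject_with_index and its behavior -- is invented for illustration:

    module Enumerable
      # A made-up inject variant that also hands the block each index.
      # Being pure Ruby, every element pays block-dispatch costs that a
      # C-coded inject avoids -- the slowdown described above.
      def inject_with_index(memo)
        each_with_index do |element, index|
          memo = yield(memo, element, index)
        end
        memo
      end
    end

    # Weighted sum: each element times its position.
    [3, 1, 4, 1].inject_with_index(0) { |acc, x, i| acc + x * i }  # => 12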

> Now it is not correct to say that Matz does not use Ruby. He does, and so he does eat his own dog food. And for the domains where he uses Ruby, he has optimized Ruby to be a useful tool. But since he doesn’t use Ruby to build Ruby, he does not have the same incentive to tune Ruby for the purpose of building languages.

http://weblog.raganwald.com/2006/11/significance-of-meta-cir...


Smalltalk Agents had no "primitives." (Read: no methods written in C.) The JIT was deemed good enough that everything could be written in Smalltalk. VisualWorks still has primitives, but a surprising number of things are written in Smalltalk, like Dictionary lookup.


Heh, sure. I would note, however, that implementation speed conditions the kinds of algorithms you write. You can't really do realtime audio processing in Ruby -- so implementation performance limits the scope of your project.

I know, because I hack on Guile ;-) Fortunately in this case we come in at about 20 seconds. But there's a long way to go. I'm tired of the "slow language" question, and of having to drop down to C to do even modest number crunching.

And by "drop down to C", I really mean anything that's not written in the HLL (Scheme in my case) -- I'm less and less interested even by C libraries providing Scheme functions.


That basically uses dynamic programming, which is "cheating". (It is a linear-time algorithm instead of an exponential one!) Different compilers will almost always give us only constant-factor improvements. Smarter algorithms will always beat that, but sometimes a smart algorithm with a slow compiler still isn't fast enough.
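
In Ruby, the contrast looks something like this sketch (fib_naive and fib_linear are illustrative names):

    # The benchmark's deliberately naive recursion: exponential time.
    def fib_naive(n)
      n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2)
    end

    # Bottom-up with two variables -- simple dynamic programming, linear time.
    def fib_linear(n)
      a, b = 0, 1
      n.times { a, b = b, a + b }
      a
    end

    fib_linear(40)  # => 102334155, effectively instant
    fib_naive(40)   # same answer, after hundreds of millions of calls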


I don't think the matrix algorithm is linear; in other words, I don't think fib(40) takes exactly twice as long to compute as fib(20). I think it's faster than linear, because it converts the multiplications to exponentiation and then takes advantage of composability, such that fib(40) should require on the order of one more operation than fib(20).

I think that means it's order log 2 n, but perhaps you HN folks can correct my poor math skills.


The site is down now, but from looking at Wikipedia, you are probably right. (Sorry, I just glanced at the way they were using the matrix before, and assumed they did n matrix multiplies to calculate fib(n).) I think you are also right about the complexity, though people would usually just write log(n), since log_2(n) = log(n)/log(2) differs from log(n) only by a constant factor.


Sorry, I didn't mean log (2n), I meant (log sub 2) n. Or in other words, the computation time is proportional to the number of binary bits in n.

The algorithm is remarkably simple. Given that fib(40) can be extracted from:

           40
    [ 1 1 ]
    [ 1 0 ]
The simplest implementation, as you surmise, is to do 40 matrix multiplications. I don't think that part is dynamic. The dynamic part comes in observing that n^40 = (n^20)^2. This can be followed downward recursively: n^20 = (n^10)^2, n^10 = (n^5)^2, n^5 = (n^2)^2 * n, and n^2 = n * n.

So... The actual matrix multiplication is much slower than a simple bignum multiplication, but there are only six such operations to perform.
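
In code, that squaring recursion looks something like the sketch below (power is a hypothetical helper; it works for bignums and for Ruby's Matrix alike, assuming n >= 1):

    # Exponentiation by squaring: x^n = (x^(n/2))^2 when n is even,
    # and x * (x^((n-1)/2))^2 when n is odd.
    def power(x, n)
      return x if n == 1
      half = power(x, n / 2)
      n.even? ? half * half : x * half * half
    end

    # For n = 40 the chain is 40 -> 20 -> 10 -> 5 -> 2 -> 1,
    # costing exactly six multiplications, as noted above.
    power(2, 40)  # => 1099511627776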



