The directly translated Ruby version (from Stack Overflow, of course) is even shorter:

    def fib(n, cache=Hash.new{ |h,k| h[k] = k < 2 ? k : h[k-1] + h[k-2] })
      cache[n]
    end
It runs out of stack around 7146, on my machine at least. The Python one is limited by the recursion depth limit in my test, but of course that's configurable at your own risk.


The Python version and this Ruby version are not equivalent.

In Ruby, default argument expressions are evaluated on every call to the function, which means they are not shared across calls.

In Python, default values are evaluated once, at definition time, and shared across all calls, allowing them to be mutated.

This becomes obvious if we change the Python and Ruby versions to print whenever they compute something that is not yet cached. For back-to-back calls to `fib(10)`, the Python version prints only on the first call; the Ruby version must recompute every value from 2 through 10 again.
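
To see the Ruby side of this, add a `puts` to the default block (a sketch; the instrumentation is the only change):

    def fib(n, cache = Hash.new { |h, k| puts "computing #{k}"; h[k] = k < 2 ? k : h[k-1] + h[k-2] })
      cache[n]
    end

    fib(10)  # prints "computing 10" down to "computing 0"
    fib(10)  # prints it all again: a fresh Hash is built on every call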


If you're going to golf it in the first place, use an endless def:

    def fib(n, cache=Hash.new{ |h,k| h[k] = k < 2 ? k : h[k-1] + h[k-2] }) = cache[n]

It's actually still kind of ungolfed. The minimal version would be just

    fib = Hash.new{ |h,k| h[k] = k < 2 ? k : h[k-1] + h[k-2] }
    
    fib[7145]

This is the proper Ruby form: since it relies on the [] operator, it's substitutable for any other callable, especially procs/lambdas, and able to wrap them as well.
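
A quick illustration of that substitutability (a sketch; fib_hash and fib_lambda are illustrative names):

    fib_hash   = Hash.new { |h, k| h[k] = k < 2 ? k : h[k-1] + h[k-2] }
    fib_lambda = ->(n) { n < 2 ? n : fib_lambda[n-1] + fib_lambda[n-2] }

    # Callers only rely on [], so either one works interchangeably.
    [fib_hash, fib_lambda].each { |f| puts f[10] }  # => 55 twice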

This isn’t just golf; it’s an excellent way to pass around a lazily computed, self-caching closure. I often use it in preference to memoization gems.

Aside from noting the false/nil equivalence concern, the original article seems over-engineered to me. Excessive metaprogramming is a common trap for Ruby devs.


IMHO, the matmul form is even better for golf: F(n) in O(n log n).
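
For instance, a minimal sketch with Ruby's stdlib Matrix (fib_matmul is an illustrative name):

    require 'matrix'

    # [[1,1],[1,0]]**n has F(n) as its off-diagonal entries;
    # Matrix#** does the exponentiation by repeated squaring.
    def fib_matmul(n)
      (Matrix[[1, 1], [1, 0]] ** n)[0, 1]
    end

    fib_matmul(7145)  # no deep recursion, so no stack limit to hit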

It's definitely something I remember finding out about the hard way.

Every recursion class talks about the Fib algorithm and why it's nice with recursion. But the iterative version doesn't have these stack limitations, presumably uses less memory and is just as fast. Doesn't that make it a better implementation?


> Every recursion class talks about the Fib algorithm and why it's nice with recursion.

It is nice with recursion in the sense that it's a straightforward, easy algorithm (what college CS student doesn't know basic arithmetic?) that illustrates recursion. It's also trivial to write the recursive version based on the mathematical definition.

It is not nice in that it's very slow and blows up due to the exponential growth of the recursive calls. No one outside of CS 101 or an algorithms class is ever going to write the recursive version again. But that not-nice aspect makes it nice again, because it's a good jumping-off point for improving an algorithm's runtime by understanding its structure (the iterative version) or throwing more memory at it (the memoized version).

> But the iterative version doesn't have these stack limitations, presumably uses less memory and is just as fast.

There is no "presumably": unless you store every Fibonacci number rather than just the last two, the iterative Fibonacci does use less memory than the naive recursive one. And there's no "just as fast": the iterative Fibonacci is faster, unless you've done something horribly wrong. It runs in linear time in n rather than exponential. If your linear algorithm is only as fast as an exponential one, you haven't made a linear algorithm.
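
For reference, the iterative version is a few lines of Ruby (a sketch; fib_iter is an illustrative name):

    # O(n) time, O(1) space: only the last two values are kept.
    def fib_iter(n)
      a, b = 0, 1
      n.times { a, b = b, a + b }
      a
    end

    fib_iter(7146)  # no recursion, so no stack to exhaust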

> Doesn't that make it a better implementation?

Yes, obviously. Which is why after CS 101 or an algorithms class you pretty much never write anything like that, except maybe as a prototype: "this recursive program follows the definition closely, but it blows up memory/time." "Start there and improve" is a very reasonable approach. "Start there and stop" is just silly.


The stack is just an implementation detail of some languages.

Some other languages have non-broken function calls (proper tail calls) and don't need to grow the stack for all of them.

In those languages, you don't need built-in loops to work around broken function calls; you can just write your own loops as library functions.

(Of course, the naive recursive definition of Fibonacci would still be slow. You'd want to write a version that can be evaluated efficiently. You are already familiar with the linear-time version in its iterative form.

You can also write a logarithmic-time version of Fibonacci.)
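
One way to get that logarithmic version is fast doubling, using the identities F(2k) = F(k)(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)² (a sketch; fib_pair is an illustrative name):

    # Returns [F(n), F(n+1)]; recursion depth is only ~log2(n).
    def fib_pair(n)
      return [0, 1] if n.zero?
      a, b = fib_pair(n >> 1)
      c = a * (2 * b - a)    # F(2k)
      d = a * a + b * b      # F(2k+1)
      n.odd? ? [d, c + d] : [c, d]
    end

    fib_pair(7145).first  # ~13 frames deep, so no stack issue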


This is fun. Just wanted to say that both my 64 GB desktop and my 16 GB laptop hit the same stack limit at exactly 7692 when using pry, and at 7694 with irb.

RUBY_THREAD_VM_STACK_SIZE controls this for Ruby. The VM stack size is fixed at interpreter startup, which is why the limit doesn't scale with RAM.
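
A minimal sketch of inspecting and raising it (the 64 MiB figure is just an example):

    # The VM stack size is fixed when the interpreter starts, independent of RAM.
    puts RubyVM::DEFAULT_PARAMS[:thread_vm_stack_size]  # typically 1 MiB

    # To raise it, set the variable before launching Ruby, e.g.:
    #   RUBY_THREAD_VM_STACK_SIZE=67108864 ruby fib.rb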


