Ruby 1.9, massive boost in threading performance (tonyspencer.com)
30 points by ashleyw on July 29, 2008 | 17 comments



Dug into this a bit more and did a quick writeup here:

http://www.skitoy.com/p/python-vs-ruby-performance/172

Bottom line: well over 70% of the time goes to rand() and 25% to list overhead; threading is drowned out by that noise.


In 1.9 Ruby is also adding "Fibers" and built-in Actor patterns (i.e., Erlang-like concurrency), which is more exciting IMO. http://www.infoq.com/articles/actors-rubinius-interview
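
For anyone who hasn't played with them yet: Fibers are cooperatively scheduled coroutines. A minimal sketch using just the stock 1.9 Fiber API (no Actor library involved):

  # A Fiber used as a simple generator: it only runs when resumed,
  # and Fiber.yield hands control (and a value) back to the caller.
  counter = Fiber.new do
    n = 0
    loop do
      Fiber.yield n
      n += 1
    end
  end

  puts counter.resume  # => 0
  puts counter.resume  # => 1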


Does Ruby 1.9 have a web page or something? Ruby's site has very little information on how the project is going, and I'm not involved enough to follow the mailing lists.

Questions like: when is Rails support coming? Will native threads be supported?


With Ruby threads you get the worst of both worlds. They are preemptive and user-level.

1.9 will introduce native threads, which aren't much better. Each native thread requires megabytes of memory for its stack. Co-routines require only 64k of memory.

Concurrency in Rubinius should require even less memory overhead as it is stackless.


1.9 will introduce native threads, which aren't much better.

What? They're not only "better", they're actually _threads_, i.e., able to run in parallel, you know? What are Ruby 1.8 threads good for, except sitting on sockets?


Producer/consumer where there are multiple I/O bound producers (examples that come to mind: RSS reader, web spider, multiple-file search). They can also be a useful abstraction for things like waiting for events from multiple sources, or running quasi-realtime simulations.

I agree, though, that real threads would be a significant improvement. Or better yet, MxN threads like GHC.
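
For what it's worth, the producer/consumer case above maps pretty directly onto the stdlib Queue. A minimal sketch, assuming the producers are I/O bound fetches (the URLs here are just placeholders):

  require 'thread'    # Queue lives here
  require 'open-uri'

  urls  = %w(http://example.com/a http://example.com/b)  # placeholder URLs
  queue = Queue.new

  # I/O-bound producers: each thread spends most of its time blocked
  # on the socket, which is exactly where green threads still help.
  producers = urls.map do |url|
    Thread.new { queue << [url, open(url).read] }
  end

  # Single consumer draining the queue.
  consumer = Thread.new do
    urls.size.times do
      url, body = queue.pop
      puts "#{url}: #{body.size} bytes"
    end
  end

  producers.each { |t| t.join }
  consumer.join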


I'm not sure how using 20 threads tests threading performance, but there's probably some issue that Ruby used to have that I'm not aware of.

It is impressive, though, that to beat the given Ruby time on my machine in SBCL, I actually had to add optimize declarations. Of course, I have no idea what kind of machine the published numbers were from, and I'm too lazy to try and install Ruby 1.9 myself, but it seems that the old 'Ruby use -> slowness' implication no longer holds.

[edit] Hold on, the Ruby 1.8 test, which takes 22 seconds in their figures, takes 4.3 seconds on my machine. So that would mean Ruby 1.9 is super-sonic ultra fast, at least on this benchmark, which is mostly testing the speed of the sorting routine, which I suppose is written in C. So what are we talking about, anyway? I'll shut up now.
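
For reference, as far as I can tell from the writeup, the benchmark is roughly: spawn 20 threads, each of which builds a list of random numbers and sorts it. A rough reconstruction (the list length here is a guess):

  # Sketch of what the benchmark seems to do; sizes are assumptions.
  threads = Array.new(20) do
    Thread.new do
      list = Array.new(100_000) { rand }  # most of the time goes here...
      list.sort                           # ...and here, both in C for MRI
    end
  end
  threads.each { |t| t.join }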


it seems that the old 'Ruby use -> slowness' implication no longer holds.

That's a fallacy in the first place (at least for web apps)... Most web application code spends most of its time waiting for the db to respond, not number-crunching.


That does not mean performance is irrelevant, does it? Heavily used web apps benefit a lot from a fast runtime (it pushes back the point where you have to split across servers quite a bit), and being able to do CPU-intensive stuff in the same environment when you need to, instead of having to bust out C or Java or whatever, is very pleasant.


You can do CPU-intensive tasks using a general purpose Python API with SciPy/NumPy. Yes, it's implemented in C.


Actually I've heard of a number of Rails apps that are CPU bound rather than being IO bound, and that's with the database hosted on a separate server.


Those numbers do look a little funny; 22 seconds seems way too high for Ruby 1.8. On my 2.6GHz MacBook it's around 3.2 seconds.

Out of curiosity, what were you doing that made your SBCL numbers so high? I've got 1.8.6 (running natively on OS X) clocked at 1.3 seconds and SBCL (Linux on VMware) at 78 ms.

(These times are with the random number generation removed; with it, the SBCL time jumps to 90 ms and 1.8.6 jumps to 4 seconds.)


I was sorting a list, actually. Using an array plus some type declarations, I got 0.45 seconds.


From the link in the blog:

=============

All tests were performed on a MacBook Pro Core2 Duo 2.16GHz with 4GB RAM, running Mac OS X 10.5.4 Leopard with Java 1.6.0_05 installed.

=============

Those JRuby and Jython numbers look pretty funny, though. Being strictly an MRI and CPython user, I don't know whether JRuby and Jython can hit both cores.


http://shootout.alioth.debian.org/gp4/benchmark.php?test=all...

Overall it looks like Ruby 1.9 is 2-4x faster, which is impressive but not massive.


I'm curious about Stackless Python; how do the two compare?


I heard callcc and eval were on the table for modifications due to their effects on performance. Anyone familiar with the details?
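
For context, callcc is the first-class continuation primitive (moved out of core and behind require 'continuation' in 1.9). A toy example of what it looks like, unrelated to the performance question:

  require 'continuation'  # 1.9 moves callcc out of core into this library

  # Use the continuation as a non-local exit from a nested loop.
  found = callcc do |exit_with|
    (1..100).each do |i|
      (1..100).each do |j|
        exit_with.call([i, j]) if i * j == 42
      end
    end
    nil
  end
  p found  # => [1, 42]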



