
Rap Genius cofounder:

> The simulation was a bit of a stretch because the supposed number of servers you need to achieve "equivalent" performance is highly dependent on how slow your worst case performance is, and if your worst case isn't that bad the numbers look a lot better

It's still pretty bad. Here's a graph of the relative performance of the different routing strategies when your response times are much better (50th percentile: 50ms, 99th: 537ms, 99.9th: 898ms):

http://s3.amazonaws.com/rapgenius/1360871196_routerstyles_fa...

See http://rapgenius.com/1504222 for more.
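
To make the dependence on tail latency concrete, here is a minimal Monte Carlo sketch (not the Rap Genius simulation itself) comparing random routing against "intelligent" least-busy routing. The dyno count, arrival gap, and service-time sampler are all assumptions, roughly matched to the percentiles quoted above:

    # Sketch only: compare queue waits under random vs. least-busy routing.
    # All constants and the service-time distribution are illustrative assumptions.
    DYNOS          = 20
    REQUESTS       = 50_000
    ARRIVAL_GAP_MS = 12.0   # assumed mean gap between request arrivals (~75% utilization)

    # Assumed service-time sampler: mostly fast, occasionally slow,
    # loosely shaped to the 50ms / 537ms / 898ms percentiles above.
    def service_time_ms
      r = rand
      if    r < 0.50 then 50.0
      elsif r < 0.99 then 50.0 + rand * 487.0
      else                898.0 + rand * 500.0
      end
    end

    def simulate(strategy)
      busy_until = Array.new(DYNOS, 0.0)   # time at which each dyno becomes free
      now   = 0.0
      waits = []

      REQUESTS.times do
        now += ARRIVAL_GAP_MS
        dyno =
          case strategy
          when :random     then rand(DYNOS)
          when :least_busy then busy_until.each_with_index.min_by { |t, _| t }[1]
          end
        start = [now, busy_until[dyno]].max
        waits << (start - now)                      # time spent queued behind other requests
        busy_until[dyno] = start + service_time_ms
      end

      waits.sort!
      { p50: waits[waits.size / 2].round, p99: waits[(waits.size * 0.99).to_i].round }
    end

    puts "random routing:     #{simulate(:random)}"
    puts "least-busy routing: #{simulate(:least_busy)}"

Making the tail of service_time_ms heavier or lighter is exactly the knob being argued about: the fatter the tail, the larger the gap between the two strategies.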




Disclaimer: I've never used Heroku :)

If I understand the chart correctly, using Unicorn with two workers gets you pretty close to intelligent routing with no intelligence. I imagine going up to three or four workers would make things even better... I don't know about Puma/Thin etc., where you could perhaps crank it even further without too much of a memory tax(?)
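
For reference, a Unicorn setup of the kind described might look like the sketch below. The worker count, timeout, and use of WEB_CONCURRENCY are assumptions; how many workers fit depends entirely on your app's memory footprint per dyno.

    # config/unicorn.rb -- illustrative sketch, not a recommendation for any specific app
    worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)  # 2-4 per dyno, memory permitting
    timeout 15
    preload_app true

    before_fork do |_server, _worker|
      # Drop the master's DB connection before forking workers.
      defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
    end

    after_fork do |_server, _worker|
      # Each worker re-establishes its own DB connection.
      defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
    end

with a Procfile entry along the lines of:

    web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
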

To me this seems like the easiest approach for everyone concerned: Heroku can keep using random routing without adding complexity, and most users won't be affected as long as they can split the workload at the dyno level.

On a slight tangent: in the Rails app I'm working on, I try to religiously offload anything that might block or take too long to a Resque task. It's not always feasible, but I think it's a good common-sense approach to avoiding bottlenecks.
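
As an illustration of that pattern, a hedged sketch: the class names (SlowReportJob, ReportBuilder, ReportMailer, User) are hypothetical, but the @queue / self.perform / Resque.enqueue shape is the standard Resque API.

    # app/jobs/slow_report_job.rb -- hypothetical example of work moved off the request cycle
    class SlowReportJob
      @queue = :reports   # Resque queue this job is pushed to

      def self.perform(user_id)
        user   = User.find(user_id)
        report = ReportBuilder.new(user).generate        # the slow part (assumed)
        ReportMailer.finished(user, report).deliver
      end
    end

    # In a controller action: enqueue and return to the client immediately.
    Resque.enqueue(SlowReportJob, current_user.id)
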



