On a silly piece of code that nobody would ever have any use for. I have tried PyPy for "real" data and numerical tasks from time to time, and never have I noticed any sort of speedup. Usually it's slower than CPython. Perhaps this latest version will be different, who knows.
I'm using it in production, and speedups tend to be on the order of 4-5x for my app (the compute-intensive part involves hierarchical agglomerative clustering of documents by text similarity, so it's data/numbers-heavy). Obviously it'll depend on your individual application (and non-CPU-bound tasks won't benefit much), but we switched to PyPy because it showed major improvements in profiling of our app on production data (and we switched around PyPy's 1.9 release, so it's even better now). It's not like everyone's just imagining the speed improvements...
I've just finished writing "High Performance Python" for O'Reilly (due August); we have a chapter on lessons from the field, and one chap talks about his successful many-machine rollout of a complex production system using PyPy for a 2x overall speed gain. We also cover Numba, Cython, profiling, numpy, etc. - all the topics you'd expect.
Not disagreeing, but they implied that this benchmark only showed a speed improvement because it's a toy, and that real workloads with real data are usually slower. That hasn't been the case in my experience.
You might try again, since things have changed. If you don't get any kind of speedup, the PyPy project would likely consider it a bug, and it would be helpful to document where it was slower. Please consider finding some way to report the specific, measurable issues you find!
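For anyone wanting to produce a number worth reporting, a minimal sketch of a reproducible comparison might look like this (the `workload` function is a hypothetical stand-in for your real code; swap in whatever was slow for you):

```python
import timeit

def workload():
    # Hypothetical stand-in for a real data/numbers-heavy task;
    # replace this with the code that was slower under PyPy.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Run this same script under both interpreters, e.g.:
#   python3 bench.py
#   pypy3 bench.py
# and compare the reported times. Taking the min of several
# repeats reduces noise from other processes on the machine.
best = min(timeit.repeat(workload, repeat=5, number=10))
print(f"best of 5 runs (10 calls each): {best:.4f}s")
```

Numbers like these, plus the interpreter versions, are exactly the kind of concrete detail a bug report benefits from.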