Hacker News

There’s something fishy about that report. It says it is presenting the fastest programs, but when I click through to “all perl programs”, there are programs/runs that are faster, often by 1–2 orders of magnitude:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

For instance, the page you linked has “pidigits” at the top, and says node is faster: 2.58s vs 3.61s.

2.58s is the slowest run of the fastest pidigits on the node page, but one of its runs took 1.04 seconds.

The perl page lists a 1.24-second run for “pidigits 2”.

The reported numbers in the language comparisons don’t seem to be averages.

All the pidigits programs list the same output, so presumably they’re running with the same ‘N’.

Between the variance and the inexplicable statistics applied to the results, I’m not sure what to conclude from these numbers.




> There’s something fishy about that report.

No, there really isn't.

> 2.58s is the slowest run of the fastest pidigits on the node page, but one of its runs took 1.04 seconds.

Notice the N column: 2,000, 6,000, 10,000.

That's a command-line argument passed to each program; it controls how many digits of pi are generated, i.e. the workload.

So, 2.58s for 10,000 digits and 1.04s for 6,000.

(And as it says, there can be a cold caches effect on the first measurements.)
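For context, the kind of program being timed can be sketched roughly as follows. This is a hypothetical minimal pidigits in Python (not one of the benchmark's actual entries, which use various algorithms and a specific output format), using Gibbons' unbounded spigot algorithm, with N taken from the command line just as described above:

```python
import sys

def pi_digits():
    """Stream decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is settled no matter how many more
            # series terms are folded in, so emit it.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), 10 * (3 * q + r) // t - 10 * n
        else:
            # Fold in the next term of the series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def main(n):
    gen = pi_digits()
    print("".join(str(next(gen)) for _ in range(n)))

if __name__ == "__main__":
    # N comes from the command line, e.g. `python pidigits.py 10000`.
    main(int(sys.argv[1]) if len(sys.argv) > 1 else 27)
```

Since the arbitrary-precision integers grow as more terms are folded in, runtime grows faster than linearly in N, which is why the 10,000-digit run is much slower than the 6,000-digit one.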



