https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
For instance, the page you linked has “pidigits” at the top and says node is faster, 2.58s vs 3.61s.
2.58s is the slowest run of the fastest pidigits program on the node page, but one of its runs took 1.04 seconds.
The perl page lists a 1.24-second run for “pidigits 2”.
The reported numbers in the language comparisons don’t seem to be averages.
All the pidigits programs show the same output, so presumably they’re running with the same ‘N’.
Between the variance and the inexplicable statistics applied to the results, I’m not sure what to conclude from these numbers.
No, there really isn’t anything inexplicable going on.
> 2.58s is the slowest run of the fastest pidigits on the node page, but one of its runs took 1.04 seconds.
Notice the N column: 2,000, 6,000, 10,000.
That’s a command-line argument passed to each program, controlling how many digits of pi are generated; in other words, the workload.
So, 2.58s for 10,000 digits and 1.04s for 6,000.
(And, as the page notes, there can be a cold-cache effect on the first measurements.)
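
To make that concrete, here’s a minimal sketch of what a pidigits program looks like: Gibbons’ unbounded spigot algorithm in TypeScript with BigInt, reading N from the command line. The file name, output formatting, and details here are mine, not the actual benchmarks game source; the point is just that N arrives as an argument and sets the workload.

    // pidigits.ts -- a hypothetical sketch, NOT the actual benchmarks game entry.
    // Gibbons' unbounded spigot algorithm for digits of pi, using BigInt.
    // N, the number of digits, comes in on the command line -- the same
    // knob the benchmark harness turns: 2000, 6000, 10000.
    const N = parseInt(process.argv[2] ?? "27", 10);

    let q = 1n, r = 0n, t = 1n, k = 1n, n = 3n, l = 3n;
    let produced = 0;
    let line = "";

    while (produced < N) {
      if (4n * q + r - t < n * t) {
        // The next digit of pi is pinned down; emit it.
        line += n.toString();
        produced++;
        if (produced % 10 === 0 || produced === N) {
          // Ten digits per line with a running count.
          console.log(line.padEnd(10) + "\t:" + produced);
          line = "";
        }
        const nr = 10n * (r - n * t);
        const nn = (10n * (3n * q + r)) / t - 10n * n;
        q = 10n * q;
        r = nr;
        n = nn;
      } else {
        // Not enough precision yet; fold in the next series term.
        const nr = (2n * q + r) * l;
        const nt = t * l;
        const nn = (q * (7n * k + 2n) + r * l) / nt;
        q = q * k;
        r = nr;
        t = nt;
        n = nn;
        k += 1n;
        l += 2n;
      }
    }

Run it three times (e.g. ts-node pidigits.ts 2000, then 6000, then 10000) and you get the three rows of the table; the times differ because the workloads do.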