
Your result is very surprising, probably because you benchmarked on the very small example corpus, whereas the other languages were benchmarked on a much, much bigger corpus.



Also, I'm not entirely sure how one plans to compare runtime performance when the original article didn't really describe the hardware it ran on (at least that I could see from skimming it). If you can get better runtime results while running on older hardware of the same architecture, you can assume you're at or below the lower bound of the benchmark system; if your runtimes are still better in that case, you can place high confidence that the Lisp version actually is more performant. I would definitely run it against the same inputs they used, since they describe how to easily derive the input (a Project Gutenberg ebook concatenated 10 times).

I suppose you could also recreate all their examples to establish your own baseline of runtime performance, but that's a lot of work for what seems (at least to me) to be a not-very-empirical benchmark.

Disclaimer: I did not check runtime complexity of any of the implementations because I didn't really care and skipped straight to the performance results table.




On x10 (no optimization yet, so consing is killing performance):

    firmament: 10
    firmament, 10
    genesis 10
    version 10

    Evaluation took:
      2.798 seconds of real time
      2.813069 seconds of total run time (2.677126 user, 0.135943 system)
      100.54% CPU
      8,125,595,437 processor cycles
      934,110,048 bytes consed
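(For anyone reading along: "bytes consed" is heap allocation in SBCL's (time ...) report. A common first step is an optimization declaration; this is standard Common Lisp, but whether it was used here isn't shown, and on its own it won't remove the consing.)

    ;; Standard optimization declaration; actually cutting the consing
    ;; usually also needs type declarations and reduced allocation in
    ;; the inner loop, none of which is shown in this thread.
    (declaim (optimize (speed 3) (safety 1) (debug 0)))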


It's just testing the output. For the benchmark, each variant is run five times with kjvbible.txt as input, and the lowest execution time is used as the result.
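Conceptually it's something like this best-of-five loop (illustrative Common Lisp only, not the article's actual harness; performance-count is the function from the transcript in this thread, the rest is my own sketch):

    ;; Illustrative best-of-five harness: time PERFORMANCE-COUNT five
    ;; times and keep the lowest wall-clock time, in seconds.
    (defun best-of-five (path)
      (loop repeat 5
            minimize (let ((start (get-internal-real-time)))
                       (performance-count path)
                       (/ (- (get-internal-real-time) start)
                          internal-time-units-per-second))))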


In that case, unoptimized Lisp on kjvbible.txt, at 0.365 seconds, is the 3rd fastest, beating C.


Do you have the same exact hardware that the article is using? Otherwise, we can't say one way or another on that with the data you've provided.

I'm also pretty certain the article runs the actual benchmarks against the 10x copy of the file.

See this example command in the article:

    time $PROGRAM <kjvbible_x10.txt >/dev/null
So, even if you had the exact same hardware, I'm pretty sure your program would only be a bit faster than the unoptimized C# version. However, it's possible that your machine is a lot slower than what's used in the article, and your program is actually pretty fast -- but without more points of comparison, we just don't know. You haven't run the other benchmark programs on your hardware and posted the results.


It would be great to have the article author run the Lisp code on their machine for a real comparison; I'd be very interested to see the results. My machine is a Linux (Ubuntu) laptop. I just sent the author an email with the Common Lisp code, so we'll see if he's interested enough to check.


It seems like he did. (Or maybe it was someone else's Lisp version; I didn't check the repo.) Common Lisp is the second slowest in his results table.


I missed the sample set from the example. On the kjvbible.txt file with no optimization:

    WEB> (time (performance-count "/home/frederick/kjvbible.txt"))

    the 64015
    and 51313
    of 34634
    26879
    to 13567
    that 12784
    in 12503
    he 10261
    shall 9838
    unto 8987
    for 8810
    i 8708
    ...

    Evaluation took:
      0.365 seconds of real time
      0.370322 seconds of total run time (0.338364 user, 0.031958 system)
      101.37% CPU
      1,060,005,621 processor cycles
      106,297,040 bytes consed

I had a bug in my original code that sorted alphabetically rather than by count, so the sort line should be:

    ((keys (sort (alexandria:hash-table-keys map)
                 (lambda (x y) (> (gethash x map) (gethash y map))))))
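For context, here's a rough sketch of the shape of performance-count that binding would sit in. Only the sort line above is the actual code from this thread; the reading, splitting, and printing (uiop:split-string, the format loop) are just my guess to make it self-contained:

    ;; Hedged sketch only: count space-separated, downcased tokens into a
    ;; hash table, then sort the keys by descending count and print them.
    (defun performance-count (path)
      (let ((map (make-hash-table :test #'equal)))
        (with-open-file (in path)
          (loop for line = (read-line in nil)
                while line
                do (dolist (word (uiop:split-string (string-downcase line)
                                                    :separator " "))
                     (incf (gethash word map 0)))))
        ;; The corrected sort line: order by count, not alphabetically.
        (let ((keys (sort (alexandria:hash-table-keys map)
                          (lambda (x y) (> (gethash x map) (gethash y map))))))
          (dolist (key keys)
            (format t "~a ~a~%" key (gethash key map))))))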



