
Too bad, I was actually excited to see what kind of problem would benefit from Lisp in the 2020s



Anyone relying on a dynamic language with state of the art compilers instead of rewriting code in C and Fortran for usable performance.


I use SBCL for research in nature-inspired algorithms. Not only can you use the same fast linear algebra libraries (BLAS, LAPACK) that, for instance, NumPy uses; SBCL also recently added support for SIMD instructions, so you no longer have to call C or assembly code for vector (and matrix) computations when you only have a couple of small matrices that are not on the "hot path" of your program. Most Common Lisp implementations in use nowadays produce native code (or C or LLVM code which is then natively compiled).

So you don't have to rewrite anything. Nowadays ML work is usually matrix multiplication at its core, so Python with NumPy (or a CUDA library) also delivers good-enough performance. The native code is already there; just call it. You don't have to write it.
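A sketch of what that looks like (assuming a recent SBCL where sb-simd ships as a contrib, an AVX-capable CPU, and vector lengths that are multiples of 4 - the package and accessor names are sb-simd's, not standard CL):

  ;; Sketch only: assumes the sb-simd contrib (otherwise load it via Quicklisp)
  ;; and vectors whose length is a multiple of 4.
  (require :sb-simd)

  (defun add-vectors (a b c)
    "Element-wise c := a + b, four doubles per iteration."
    (declare (type (simple-array double-float (*)) a b c)
             (optimize (speed 3)))
    (loop for i of-type fixnum below (length a) by 4
          do (setf (sb-simd-avx:f64.4-aref c i)
                   (sb-simd-avx:f64.4+ (sb-simd-avx:f64.4-aref a i)
                                       (sb-simd-avx:f64.4-aref b i))))
    c)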


Yeah, on a very niche use case.


Extant lisp compilers are not 'state of the art'; that they are is a bad meme. SBCL certainly attains very usable performance, though.


They certainly are, when compared with many alternatives.


Which alternatives? SBCL:

- Requires manual type annotations to achieve remotely reasonable performance

- Does no interesting optimisations around method dispatch

- Chokes on code which reassigns variables

- Doesn't model memory (sroa, store forwarding, alias analysis, concurrency...)

- Doesn't do code motion

- Has a decent, but not particularly good gc

Hotspot hits on all of these points.

It's true that if you hand-hold the compiler, you can get fairly reasonable machine code out of it, same as you can with some C compilers these days. But it's 80s technology and it shows.
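To make the hand-holding concrete, a minimal sketch (hypothetical function; the declarations are the annotations in question): without them SBCL falls back to generic arithmetic on boxed floats, with them it compiles to straight unboxed double-float code.

  ;; Hypothetical example: the declarations below are the "hand-holding".
  (defun dot (a b)
    (declare (type (simple-array double-float (*)) a b)
             (optimize (speed 3)))
    (loop for i of-type fixnum below (length a)
          sum (* (aref a i) (aref b i)) of-type double-float))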


What language doesn't require type annotations to achieve good performance? More specifically, name any programming language that can beat SBCL in speed without using type annotations.


Not having used them, I'd expect the Truffle implementations of Python and Ruby to do well, seeing as Graal handles #2-5 of moonchild's list. From there #1 might fall out. (Apparently the fancy GCs for #6 are only in the Enterprise Edition though?)

I'm working on a better, parallel (but still far from Java's state of the art) GC for SBCL <https://zenodo.org/record/7816398> - which presumably counts as some skin in the game.


Look at this real-time garbage collector for C++: https://github.com/pebal/sgcl


Is there any documentation on the algorithm used?


It uses a mark-and-sweep algorithm with a tri-color marking variation. There is no documentation yet, but I am happy to answer your questions (snibisz@gmail.com).
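Not SGCL's actual code (that's C++ and, as said, undocumented so far), but the tri-color idea fits in a few lines of toy Lisp, to stay in the language of the rest of the thread: everything starts white, the roots are greyed, grey objects get their children greyed and are then blackened, and whatever is still white at the end is garbage.

  ;; Toy model of tri-color mark-and-sweep, not a real collector:
  ;; real GCs walk the actual heap instead of explicit node structs.
  (defstruct node (color :white) children)

  (defun gc-cycle (roots all-nodes)
    ;; Mark phase: grey the roots, then scan grey nodes until none remain.
    (let ((grey (copy-list roots)))
      (dolist (r roots) (setf (node-color r) :grey))
      (loop while grey
            do (let ((n (pop grey)))
                 (dolist (c (node-children n))
                   (when (eq (node-color c) :white)
                     (setf (node-color c) :grey)
                     (push c grey)))
                 (setf (node-color n) :black))))
    ;; Sweep phase: anything still white is unreachable; keep the black nodes.
    (remove-if (lambda (n) (eq (node-color n) :white)) all-nodes))

The real-time part is presumably about doing that marking incrementally while the program keeps running, which is where the hard engineering lives.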


Yeah, state-of-the-art GC, but SBCL and ECL run circles around any Java turd in performance, even one AOT-compiled with Graal.

Just compare the Nyxt web browser with any Java monster out there.


Do they? I had to help SBCL with bounds checks (read: disable them) when porting the Java NonBlockingHashMap to Common Lisp. Perhaps still too micro a benchmark, and I've indeed made turds with Spring, but the rest of HotSpot would do wonders on non-turd programs. (Would also expect Graal with JIT to be faster than AOT after warmup, but no experience there.)
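For anyone wondering, "helping SBCL with bounds checks" means roughly this kind of thing (hypothetical accessor; (safety 0) is what removes the checks):

  ;; With the default safety SBCL emits a bounds check on every access;
  ;; (safety 0) in a hot accessor removes it (and all other runtime checks).
  (declaim (inline bucket-ref))
  (defun bucket-ref (table i)
    (declare (type simple-vector table)
             (type fixnum i)
             (optimize (speed 3) (safety 0)))
    (svref table i))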


Self, JavaScript (V8), Java (HotSpot; generics are latently monomorphised according to hotness; also see 'invokedynamic'), APL (APL\3000)


I don't think you can say that java "doesn't use type annotations".


Well, technically you do have the var keyword now.

  var x = MyAwesomeClassFactoryBeanTemplate.getBean().getFactoryInstance().createNewMyAwesomeClass(foo, bar);


I am not super familiar with Java but that syntax typically implies type inference, which is not the same as not needing type annotations.


Not only is it type inference, it's limited to locally declared variables.


Do you have an idea/opinion on Chez Scheme and its compiler?


I haven't used it. From what I hear, it generates ok code, but compromises on codegen quality for the sake of simplicity.


It seems that, with the massive amounts of money and full-time engineers, the JVM ought to be far more advanced technically, but does it actually make a difference? I don't know anything about compilers; I just know that LuaJIT is faster than any dynamic language on the JVM, SBCL is much faster than Lisp on the JVM, Chez Scheme is much faster than Scheme on the JVM, and PyPy is faster than Jython. As for statically typed GC'd languages that should be comparable to Java, Go and OCaml are famous for having very simple and fast compilers (fast as in they compile quickly without doing much optimization), and they don't perform any worse than Java.


First of all, you're missing Allegro and LispWorks.

Secondly, we are talking about dynamic languages here that keep relying on C and Fortran.


I have LispWorks (I like the IDE), and although the hobbyist, non-commercial licenses are not out of reach for many (if not most) people, many (even companies) prefer running on free-of-charge stuff. My current employer doesn't even pay for PyCharm (the main language at my current workplace is Python), and those licenses are dirt cheap compared to e.g. a Microsoft MSDN subscription.

I haven't stressed the garbage collector enough to say whether LW is faster or slower than other CL implementations. I'm currently doing algorithmic research where the strain on the garbage collector is not really that big; it's mainly about numerical performance, and SBCL is by far the best of what I tried: LW, SBCL, CCL, CLISP, ABCL, ECL. The garbage collector usually takes 1-2% of run time, which isn't bad. I suppose that if I stressed it more, the LW garbage collector would outperform SBCL's, but that's only my theory, based on the target audience of LW.
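If anyone wants to reproduce that kind of number: TIME in SBCL reports GC run time next to total run time, so the share can be read off directly (my-benchmark below is a stand-in for your own workload):

  ;; SBCL's TIME output includes a "seconds GC time" figure and bytes consed;
  ;; the GC share is just GC run time divided by total run time.
  (time (my-benchmark))  ; MY-BENCHMARK is hypothetical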

If you have experience with Allegro and LispWorks and can compare them to other implementations, could you share it, please? I'm quite curious. There isn't much on the internet about this topic.


I agreed with you that SBCL has usable performance (like C and Fortran compilers). I disagreed that its performance is state of the art or particularly commendable.

Are Allegro and LispWorks appreciably better? My understanding was that LispWorks, at least, forked from CMUCL a long time ago and put less work into Python (the CMUCL compiler) than SBCL did.




