Emacs Lisp JIT Compiler (lists.gnu.org)
172 points by _pvxk on Aug 14, 2018 | hide | past | favorite | 95 comments



I hate to be "that guy," but what happened to the Guile-Emacs project? Guile performs way better than Elisp too, but (perhaps more importantly) raises the possibility of writing configuration and extensions in Lua and Scheme.


IMHO switching to Guile from C is moving in the wrong direction: rather than going from a poor-but-performant systems programming language like C to a pedagogic language like Scheme, it’d be a better idea to move to a rich-and-performant language meant for industrial systems programming like Common Lisp. Scheme’s a pretty little language, but it’s not meant for building large applications. Indeed, elisp’s weaknesses are IMHO mainly where it’s most akin to Scheme (e.g. its flat namespace).

I also think it’d be a bad idea to make it too easy to write emacs extensions in anything but a Lisp, because that would fragment the ecosystem. If we’re going to replace elisp, it should be with a better Lisp, not with a lesser language (e.g. Lua, Python, Ruby … whatever).

The great thing about a JIT is that we get a speedup without changing the actual implementation language or fragmenting the ecosystem.


The idea was to replace Emacs's embedded Elisp interpreter with Guile's, not to rewrite Emacs in Guile Scheme.


Why not just replace Emacs's embedded elisp interpreter with one written in Common Lisp? CLOCC has a basic elisp interpreter already: https://sourceforge.net/p/clocc/hg/ci/default/tree/src/cllib...

Take that, put a little work into it and one could presumably replace the C core of Emacs with Common Lisp.


Stallman hates Common Lisp.

https://www.gnu.org/gnu/rms-lisp.en.html


Thanks, that was a fun read. BTW, I don’t think Richard totally dislikes Common Lisp: years ago I got an email from him about releasing my ancient Springer-Verlag Common Lisp book under the FSF documentation license. Unfortunately I couldn’t do it because I didn’t have the manuscript files.


If you own the copyright to the text couldn't you make it known that if someone were to OCR it, clean up the formatting, and publish it that the result could be freely distributed?


I guess in a fair and honest world, yes because it's legal?

But in our actual world, it's also legal but publishers may play dirty and somehow claim rights because you published something based on their edition?

Just thinking out loud.


> I don’t think Richard totally dislikes Common Lisp

maybe not totally, but mostly. Beyond that he does not care about it and isn't interested in it.


If you were already an "advanced Lisper" in the 1980's, Common Lisp might be something like C11 to someone who had expert C chops before even C99.

I code in C regularly, yet I do not care and am not interested in C after C90. Only some library features of C99 and that's about it.

Stallman's concepts of what is Lisp, how to use Lisp, how to implement Lisp, were formed long before Common Lisp.

If you're in that position, it's easy to have allergic reactions to requirements you don't agree with.


Kind of surprising, since Stallman was a developer with and user of Lisp Machine Lisp, the single biggest influence of Common Lisp.

Maclisp and its successor Lisp Machine Lisp were probably his biggest influences. Emacs Lisp is based on Maclisp - without the features of Lisp Machine Lisp like keyword arguments, Flavors as the object system, ...


"Common Lisp Modules" by any chance?


The linked transcript doesn’t seem to indicate he hates it. I read that he just found some parts of CL to not be “lispy”, like keyword arguments.


> I implemented Common Lisp once on the Lisp machine, and I'm not all that happy with it. One thing I don't like terribly much is keyword arguments (8). They don't seem quite Lispy to me; I'll do it sometimes but I minimize the times when I do that.

> [...]

> So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large.

I'd hardly say he hates it based solely on that essay.


Wow, I so agree with that and the (8) footnote: "I don't mind if a very complex and heavyweight function takes keyword arguments. What bothers me is making simple basic functions such as “member” use them."

I kept keyword arguments out of TXR Lisp - that is, out of the function call mechanism. It doesn't seem like a good idea to deal with a dictionary-like mechanism in function calls.

I invented something called "parameter list macros". A parameter list macro can be written which adds keyword argument ability to a function (and such a macro is provided). You can do (lambda (:key a b c -- x y) ...) and now you have keyword parameters x and y. They support the (sym dfl-val present-p) syntax and all. The :key parameter macro implements them via a source transformation. :key is able to rewrite both the parameter list and the body of the function to make this work.

One reason Common Lisp programmers use keyword arguments is that there is no way to invoke the default value of an optional parameter which precedes another optional parameter to which an argument is being specified. If a function has optional arguments X and Y, the caller cannot default X while specifying Y.

I fixed that in TXR Lisp: the symbol : (colon) can be passed to an optional parameter to activate its default value. So if a binary function f has two optional parameters we can invoke (f : 3) to default the first optional, and specify the second one as 3. This special feature is only supported in function calls, not in macro/destructuring parameter lists.

That : symbol also serves as the &optional keyword, separating the required parameters from the optionals. It harmonizes with the consing dot that demarcates the &rest parameter: (lambda (x y : z w . r) ...): x y required, z w optional, r rest. The : symbol is nothing more than the symbol named "" (empty string) in the keyword package. It provides a very useful third value to the nil and t duo. Common Lisps have this symbol! Unfortunately, they tend to print it back at you funny, like :||, and neglect to get any mileage out of its notational convenience.

As for those member functions, I provide a memq, memql and memqual that use the three different equalities. The more general member function takes a key and test function, but not as keyword arguments but positional optional parameters. If you want the default test, but custom key: [member foo bar : mykey].


He just thought it was "too big" at the time. That may no longer be the case.


The Guile VM was deliberately designed to support multiple languages. Guile Emacs was an attempt to run Elisp on Guile's VM, not to rewrite Emacs in Scheme. Guile's VM is also written in C, FWIW.


Guile Scheme is a pretty nice language. It is a superset of R6RS and probably provides a bigger stdlib than ANSI CL (with a proper module system and all the introspection you are used to from CL).

The reason for guile Emacs is that guile is the official extension language of the GNU project. It is a very capable implementation, and the elisp implementation is a lot cleaner than the one in Emacs.

You would also gain a lot of features not available in elisp: proper threading (with both pthreads and fibers), delimited continuations, and much more.

Not only that, you would get a runtime for elisp that works outside Emacs. Imagine being able to write programs and just load guile-org-mode and be able to script the org process without invoking Emacs.


> rather than going from a poor-but-performant systems programming language like C to a pedagogic language like Scheme, it’d be a better idea to move to a rich-and-performant language meant for industrial systems programming like Common Lisp.

> Scheme’s a pretty little language, but it’s not meant for building large applications.

Weren't people saying the same about python and Java long ago?

How do you arrive at that conclusion?

> Indeed, elisp’s weaknesses are IMHO mainly where it’s most akin to Scheme (e.g. its flat namespace)

Racket at least wasn't flat IIRC; is Guile Scheme?

> The great thing about a JIT is that we get a speedup without changing the actual implementation language or fragmenting the ecosystem.

Agreed.


> > Scheme’s a pretty little language, but it’s not meant for building large applications.

> Weren't people saying the same about python and Java long ago?

I spent roughly a decade as a professional Python developer, and in retrospect I’d probably agree that it’s not well-suited to large applications. Why, exactly, is for another discussion.

As for Java — it’s certainly not meant for building small ones!

> How do you arrive at that conclusion?

The same way that the Scheme committee did, when they came up with R6RS: by trying to use Scheme for large systems.

It lacks features which aid building large systems (e.g. namespaces); it lacks features which enable building industrial-strength systems (e.g. the Lisp condition system or CLOS); it has features which make code more complex and tend to hinder performance (e.g. call/cc); and it even has a broken feature (dynamic-wind).

Then there are things like: conflating functions, variables & all other names; breaking NIL into NIL, () & #f. Those are partly a matter of taste, but I think also an indication of Lisp’s pragmatic nature: in practice, doing things the Lisp way is better, even though in theory doing them the Scheme way is.

> Racket at least wasnt flat iirc, is guile scheme?

Racket isn’t Scheme anymore. That’s not a bad thing, and indeed Racket is pretty amazing. I wish that the same amount of effort had been expended making Lisp better, but everyone’s gotta scratch his own itch.


Well, in Guile you get GOOPS, which is pretty much a clone of CLOS. call/cc is being phased out in Guile since delimited continuations are the new black. unwind-protect is not enough in a language where you have first-class continuations, so the complexity of dynamic-wind is needed. For your escape-continuation needs you can implement unwind-protect and very easily abstract away the parts of dynamic-wind you don't need.

The only thing missing is really the condition system, but that could be bolted on (that is the correct term) using continuations (preferably delimited). There is even a suggestion for r7rs-large to include them. I hope it gets voted in.


I wrote a comment here over a year ago summarizing some of the state of it, it seems like there's been no changes since then: https://news.ycombinator.com/item?id=13885682


> I hate to be "that guy," but what happened to the Guile-Emacs project? Guile performs way better than Elisp too

I’m sorry to be that guy, but did you ever try it out for yourself?

Just starting Emacs took as long as a full regular Emacs elc-build, if not longer.

So not exactly faster by a universal standard.


The guile-emacs branch afaik did not use precompiled byte-code. So startup took ages.

Guile itself is making huge progress. There's ongoing work for JIT compilation as well. Andy Wingo drives this work. And he brings a lot of experience from work on the V8 Javascript engine.

I see no technical reason why guile-elisp should be any slower than normal elisp. It's just that some of the optimizations have not yet been put into place. After a switchover all the improvements from Guile will come for free in the future.

See the branches named "master", "lightning" and "wip-elisp" here: http://git.savannah.gnu.org/cgit/guile.git


Guile being faster at running Elisp doesn't necessarily mean that Emacs + Guile is faster than Emacs. Is that why the project stalled out?


I think it was issues improving startup performance + largely that it was a one man show.

It doesn’t take much to terminally derail an effort driven by only a single contributor.


How did swapping out the Elisp interpreter for a Guile interpreter hurt performance so bad? Seems like it should be a pretty benign change.


The work isn't finished, it runs but that's it. Comparisons aren't meaningful in the current state. Besides the bytecode thing, there was also an issue with string conversions that afair wasn't solved.


Emacs pre-compiles byte code, I'd imagine that the guile version didn't do that by default.


I think the primary contributor stopped work on it, no one else took over, and most users don't perceive its benefits outweigh its switching costs.


Lack of volunteers.


Strings are still too slow.


That's quite a blast from the past.

Good on Aleksey for picking it up and modernizing libjit to this point.

And for everyone who thinks of LLVM in this context, here's a thread with some "primary sources" [1].

[1] - http://lists.gnu.org/archive/html/dotgnu-libjit/2004-05/msg0...


Thanks, Gopal. I didn't do much, just maintained it on life support. Recently some good contributors arrived: Jakob and Tom. Let's hope this really revives the project.


AFAIK, llvm jit is incredibly slow, to the point that certain emulators that have used it have had to drop it for performance's sake.


IIRC LLVM relatively recently did a major overhaul of the JIT API, including better support for lazy compilation. That might improve compilation speed drastically


Oh wow. I haven't seen those names in a loooong time. I used to lurk and follow the DotGNU mailing lists avidly :-)


Don't miss the later comments by Richard Stallman (creator of Emacs), and Tom's reply:

  Richard> I don't think a 3% speedup is worth those drawbacks.  
  Richard> Or even a 10% speedup. 
  Richard> A really big speedup would justify the costs.
  
  Tom> It is 3x, not 3%.

Mic drop.


More context:

In some simple benchmarks, it is about 3x faster than the bytecode interpreter.

I'm always skeptical of statements like these, because workloads vary so much.

JITs seem to do well for numerical benchmarks, e.g. summing a list of numbers or the mandelbrot fractal.

They seem to do worse with string-based workloads, because the bottleneck is in memory allocations, and I have yet to see a JIT that does anything about that (i.e. analyzing code to reduce allocations).

I imagine that ELisp is used mostly for string workloads and not numeric workloads. So I won't be surprised if the 3x number doesn't hold up. I'm interested in hearing more details and happy to be corrected.


>I have yet to see a JIT that does anything about that (i.e. analyzing code to reduce allocations).

Most JITs do this. LuaJIT does allocation sinking of tables, strings, and even C-FFI structs. HotSpot does escape analysis of everything. IIRC Graal can even do partial escape analysis (allocate the object on the stack and then copy it to the heap if it escapes on one code path). I imagine the major JavaScript engines are similar. It's a well-known performance optimization.


Yes I was wrong about that [1], but now I'm interested in any benchmarks / performance evaluation of escape analysis in JITs :)

From what I gather, it's a lot more important in PyPy because integers and floats are boxed!

[1] https://news.ycombinator.com/item?id=17761452


Any source on how escape analysis is implemented? Is it only useful for a JIT, or also for a "normal" bytecode interpreter?


Traditional escape analysis uses algorithms like equi-escape sets.

https://www.usenix.org/legacy/events/vee05/full_papers/p111-...


> I have yet to see a JIT that does anything about that (i.e. analyzing code to reduce allocations)

You mean you've never seen a JIT that does anything about memory allocations for ELISP? Or do you mean you've never seen a JIT do anything at all about memory allocations?

Because removing memory allocations through escape analysis and scalar replacement is a key feature of any sophisticated JIT, and there are definitely many JITs which do this.

The JIT for Ruby I work on will effectively remove the allocation of string objects.


Hm yes I was wrong about that. It looks like v8 and PyPy do it (which is apparent from some Googling and grepping of the code).

I have been reading some papers on JITs and I don't see escape analysis mentioned that often. In the PyPy paper (which is over a decade old) they mention it as future work.

Still, I actually tried PyPy on a string-based workload and it was slower than CPython and used more memory. I don't know why but that contributes to my feeling that JITs are bad for string-based workloads.

I'm interested in seeing any pointers to benchmarks that show the improvements resulting from escape analysis in JITs. I haven't seen anything like that and I've done a decent amount of research.

A cursory look at this blog post makes me think it's not super straightforward:

https://v8project.blogspot.com/2017/09/disabling-escape-anal...

That post is less than a year old! i.e. the fact that v8 has been around for 10+ years and they're still updating escape analysis makes me wonder what the issue with it is. Is it hard to implement or does it not produce that much speedup? I appreciate any pointers.


> I'm interested in seeing any pointers to benchmarks that show the improvements resulting from escape analysis in JITs. I haven't seen anything like that and I've done a decent amount of research.

If you've done a decent amount of research in the field of JITs and you aren't aware of what escape analysis achieves in practice then I'm very surprised.

http://www.ssw.uni-linz.ac.at/Research/Papers/Stadler14/Stad...

That paper is relatively recent, so you can follow the chain of papers from its references.


>Still, I actually tried PyPy on a string-based workload and it was slower than CPython and used more memory. I don't know why but that contributes to my feeling that JITs are bad for string-based workloads.

Now that I'm implementing an interpreter for a relational language, here is what I know about why Python is slow:

https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow...

And also:

https://speakerdeck.com/alex/why-python-ruby-and-javascript-...

It is challenging to be dynamic and also fast. So you need to design the language/runtime with performance in mind, or at least minimize what could be very slow (which is what I'm trying to do).


The development of libjit almost halted a while back. But thanks to the last GSoC it is set to receive a major update soon: https://github.com/ademakov/libjit/pull/14


Is it about the new register allocator? How good is it?


Yes, initial benchmark results: https://github.com/M4GNV5/GSoC2018


> From: Tom Tromey

Of course it is. That guy's an amazing wizard.


The author's blog has some extra details (see also the comments):

https://tromey.com/blog/?p=982


Tangentially related:

Applying some commonly recommended optimizations, you can shave Emacs init time to about 2 seconds, from whatever your previous init time was (7 in my case, but I have also read about 60s -> 2s improvements).

Init time alone greatly affects how a given system is perceived.


If you start Emacs with `emacs --no-init-file --no-site-file --no-splash` it starts (especially in non-GUI mode) in a fraction of a second.

If you then write your init file(s) in a certain way (use autoloads) you can maintain this performance while retaining all your customizations.
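For anyone curious, a minimal sketch of the autoload approach (the package and keybinding are just illustrative):

```elisp
;; Instead of (require 'magit) at startup, which loads the whole
;; package eagerly, register an autoload stub: the real package is
;; only loaded the first time the command is actually invoked.
(autoload 'magit-status "magit" "Open the Magit status buffer." t)
(global-set-key (kbd "C-x g") #'magit-status)

;; Defer package configuration until that load actually happens.
(with-eval-after-load 'magit
  (setq magit-diff-refine-hunk t))
```

(Package managers bundle this pattern up; e.g. use-package's `:defer` and `:commands` keywords generate the same kind of stubs.)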

The reason almost nobody does this (e.g. I don't; my Emacs takes many seconds to start) is that the workflow with Emacs, for those who use it, is not to restart it all the time.

I don't reboot my computer either every time I need to open a new browser tab, and I don't need to restart Emacs every time I need to open a new file, so I really don't care if it takes 10 seconds to start it.


> If you start Emacs with `emacs --no-init-file --no-site-file --no-splash` it starts (especially in non-GUI mode) in a fraction of a second.

Sure thing, but most real-world setups have multiple heavy packages installed.

> If you then write your init file(s) in a certain way (use autoloads) you can maintain this performance while retaining all your customizations.

Worth noting that autoloading is useful up to a point - after a threshold, the gains are only theoretical (your reported startup was 0.1s, but immediately after Emacs startup, it froze for 2 seconds because all the autoloads are now doing their work).

In the end when I open Emacs I want a bunch of stuff to happen (open files, color them with syntax, etc) - some work has to be performed sooner or later.

> the workflow with Emacs for those who use it is not to be restarting it all the time.

I start it a few times a day to avoid accumulating state, particularly state related to Clojure nREPL connections. Not a costly operation.


> most real-world setups have multiple heavy packages installed.

Sure, I wouldn't use it in this mode, but this line to the effect of "it takes seconds to start up" is usually uttered by people who are more used to the likes of nano or vi. You can also use those command-line options to start Emacs in that sort of one-off mode.

> your reported startup was 0.1s, but immediately after Emacs startup, it froze 2 seconds.

That's odd, I can start `emacs --no-init-file --no-site-file --no-splash -nw <file>` and write something to the file, C-x C-s C-x C-c to write + exit with no more noticeable delay than doing the same with mg, nano or vim on my system. This is with Emacs 25.2.2.

> I want a bunch of stuff to happen (open files, color them with syntax, etc).

Opening a file to have it syntax-colored takes less than a second (feels like at most 1/8 of a second) in that mode.

> I start it a few times a day[...]

Doesn't Clojure have some equivalent of M-x tramp-cleanup-all-connections? Occasionally I'll mass-close buffers, but the uptime of my Emacs tends to be 1:1 with my laptop's, usually about a month (for security updates and the like).

I believe this mode of use is more typical than restarting it this often.


> That's odd...

I meant: for regular usage (with full packages), some people might pride themselves on a 0.1s startup, but then as soon as Emacs does something useful it will freeze a little.

> Opening a file...

Maybe my example wasn't clear enough. When I open Emacs a lot of functionality will be 'there', as it happens with an IDE. Think of a project tree, terminal, misc functionality, and unavoidable dependencies.

> Doesn't Clojure...

The Clojure(Script) tooling stack has many layers from different authors, so it's reasonable to not trust things to work after several connect/disconnect cycles.


Users who want fast startup simply start the Emacs server once on system startup and run emacsclient when needed. Emacs starts in a jiffy when done this way. Of course this probably will not help you with resetting accumulated state.
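A minimal sketch of that setup (file names and keybinding-free usage below are just illustrative):

```elisp
;; init.el: start the Emacs server in the first, long-lived instance.
(require 'server)
(unless (server-running-p)
  (server-start))

;; From a shell, subsequent "starts" are then near-instant:
;;   emacsclient -t some-file.txt   ; open in the terminal
;;   emacsclient -c some-file.txt   ; open a new GUI frame
```

Alternatively, `emacs --daemon` starts the server without any initial frame.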


I really don't understand this concern.

My emacs gets restarted when my laptop reboots. Which happens on a timescale of months. I honestly don't care if it takes minutes to start, so long as it's super responsive once it's running.

I feel like maybe something's been missed if you're starting emacs all the time. But then, I feel the same way about Linux, and clearly many many people disagree with me.

So the comment in that thread about AOT compilation was extra interesting for me :-)


It’s worth noting that Emacs’ built in lisp libraries are sort-of aot compiled: Emacs loads all the Lisp into memory, evaluating some initialisation stuff, then it does a gc, then it does a very scary “unexec” operation which has been in the source for around 30 years with most people so scared of the function that they only poke at it enough to make it just-about work. The function takes the current contents of memory and produces an ELF from it which will start up with the lisp libraries already loaded into memory.


There's a WIP portable dumper that hasn't been merged yet which'll replace that. This LWN article has a good summary of it: https://lwn.net/Articles/707615/


On the other hand, I don’t really care about init time at all, and I think many users don’t care for the same reason. I start Emacs after a reboot (roughly once every three weeks), so even if it took a minute to start, I would be optimising ~0.01% of my Emacs-using time. If it took ~60s and I reduced that to 2s, this optimisation would save less than 1.5 hours over five years. Besides, that Emacs-starting time is spent reading email, not waiting for Emacs, so it wouldn’t bother me much if it took ~10min.


Probably there are two kinds of devs (neither being 'superior'): those who are paranoid about keeping things stateless, and those who aren't.

A clear example being restarting your computer. Sometimes I do it for no good reason other than knowing that state will be pristine the next time I boot it.


What are these recommended optimizations?


This section from the Doom Emacs distribution's FAQ covers some of the recommendations: https://github.com/hlissner/doom-emacs/wiki/FAQ#how-is-dooms...


As far as I know, things like: changing gc-cons-threshold, setting file-name-handler-alist to nil, adjusting some jit-lock-* values. And using deferred package loading.
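For reference, the first two of those tweaks usually look something like this (the thresholds are illustrative; exact recipes vary between configs):

```elisp
;; Raise the GC threshold during startup so Emacs doesn't collect
;; garbage dozens of times while loading packages, then restore a
;; more modest value once init is done.
(setq gc-cons-threshold (* 100 1024 1024))   ; 100 MB during init
(add-hook 'emacs-startup-hook
          (lambda () (setq gc-cons-threshold (* 8 1024 1024))))

;; Skip expensive file-name-handler lookups while loading .el/.elc
;; files, restoring the original value afterwards.
(defvar my/saved-file-name-handler-alist file-name-handler-alist)
(setq file-name-handler-alist nil)
(add-hook 'emacs-startup-hook
          (lambda ()
            (setq file-name-handler-alist
                  my/saved-file-name-handler-alist)))
```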


The extreme version is to reproduce a step from building Emacs where you produce a binary with the libraries you want already loaded into memory. This is like making a lisp-image except the image also includes a big C text editor


This sounds pretty neat (and something a CI server could routinely do) - has it been done in practice?


Doing this requires various security features that operating systems have gained since the 1980s to be turned off so it’s not super advisable if you can avoid it


Cool! I just compiled it. How do I know whether the JIT is enabled? I wonder how to convince myself what the speedup is for different usages.

  git clone git@github.com:emacs-mirror/emacs.git
  cd emacs
  git checkout feature/libjit 

  # Instructions for macos from
  # https://stuff-things.net/2018/01/30/building-emacs-25-on-macos-high-sierra/
  brew install autoconf automake texinfo
  export PATH="/usr/local/opt/texinfo/bin:$PATH"
  
  ./autogen.sh
  ./configure --with-ns
  make install
  
  open nextstep/Emacs.app


Absolutely love the message from stallman and the response (paraphrased below, full quote [0]):

Richard> I don't think a 3% speedup is worth those drawbacks [ed: added complexity]. Or even a 10% speedup. A really big speedup would justify the costs.

Tom> It is 3x, not 3%.

[0] https://lists.gnu.org/archive/html/emacs-devel/2018-08/msg00...


The code is in branch feature/libjit of emacs' repo, if anyone wants to take a look. http://git.savannah.gnu.org/cgit/emacs.git/tree/?h=feature/l...


Wouldn't it be nice if ELisp were implemented in RPython? What sort of difficulties would this face over the JIT compiler in the link? It would have the advantage that such an ELisp implementation would still be an interpreter (annotated and auto-JITted). Presumably, that would keep it "simple".


With JIT compilation, wouldn't byte compilation become redundant? If yes, a JIT could significantly reduce the complexity of Emacs (contrary to RMS's comment).


> wouldn't byte compilation become redundant?

Bytecode is still useful, even with a JIT (just like in Java): without it, the JIT would need to redo the heavy parsing, semantic analysis, etc. every time it starts.

However, the JIT can execute the bytecode faster than an interpreter can, because it can keep more intermediate values in registers (e.g. in an xmm register instead of spilled to the heap) and can do a tiny bit of CPU-guided optimization (like checking whether AES-NI is available).
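You can inspect the bytecode Emacs already produces, which is what a JIT like this one consumes (a scratch-buffer sketch; the function is just an example):

```elisp
;; Define, byte-compile, and disassemble a trivial function.
(defun add1 (n) (1+ n))
(byte-compile 'add1)
(disassemble 'add1)  ; pops up a buffer showing the stack-machine ops
```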


Make my emacs faster, yes please!


The reason it's difficult to make elisp fast is that the language defaults to dynamic scoping rather than lexical scoping. That means any function that uses `let` is no longer making a tail call: the value of each variable has to be restored after calling whatever function would have been the tail call.

Stuff like that adds up. It's why Lua is so fast compared to JS, too.


Lexical binding was available starting in Emacs 24.1, and most of the standard library has it enabled.


But this could be solved with escape analysis. Rarely do such let locals shadow outer locals, so they don't need to be restored.


You can’t know whether you are shadowing an outer variable. With dynamic scope:

  (defun get-x () x)
  (defun foo () (let ((x 7)) (get-x)))
  (foo) ;; => 7 (not shadowing x)
  (let ((x 4)) (foo)) ;; => 7 (is shadowing x)
  (setq old-get-x #'get-x)
  (defun get-x () (let ((x 10)) (funcall old-get-x)))
  (foo) ;; => 10


Emacs Lisp has supported lexical scope in its byte code compiler as an option for eons, and is starting to make more use of that. No reason why a brand new JIT couldn't follow suit.


Ouch, you are right. So the only strategy would be to switch to lexical compilation block by block, while fixing the remaining dynamic-extent vars.


Which is more or less the strategy in the Emacs codebase - the intention is to incrementally port to lexical binding on a per-file basis. (This is seen through the annotation -- lexical-binding:t -- at the top of many .el files.)
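Concretely, the annotation is a file-local variable on the file's first line, e.g. (file name illustrative):

```elisp
;;; my-lib.el --- example library -*- lexical-binding: t; -*-

;; Under lexical binding, this closure captures N lexically; in a file
;; without the cookie, the same code would use dynamic binding, and the
;; lambda would not reliably see N after `make-adder' returns.
(defun make-adder (n)
  (lambda (x) (+ x n)))
```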


This indicates that node (v8 js) is much faster than lua, contrary to what you just said:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


As others have mentioned, luajit is a lot faster than lua:

http://luajit.org/performance_x86.html

As an aside, I see there is a GitLab project for the benchmarks game - but I was surprised that no one appears to have put together an automated, open, bring-your-own-language/code/patches version. Maybe with some community voting (e.g. prefer idiomatic vs max speed)?

https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...


> …but I was surprised that no one appears to have put together…

Why haven't you? That's probably why no one else has :-)

Instead be surprised that someone has continued to push the benchmarks game along, and that people have continued to contribute programs gratis.


It's been on my todo-list over neat ideas for a while. I just assumed someone would've beaten me to it by now :)

I suspect it might have to do with resources - dedicating multiple cores to benchmarking isn't going to be easy to do for free. Might be feasible for low cost, though.


Probably thinking of LuaJIT, not measured by this benchmark


Is that using the Lua interpreter, or LuaJIT?


That webpage says Lua 5.3.4 and seems like LuaJIT does not support Lua 5.3.4


Cool to see that libjit can actually help with elisp, I've not really seen many other uses of libjit.


https://lists.gnu.org/archive/html/emacs-devel/2018-08/msg00...

> To replace an interpreter by a JIT compiler means more complexity and also more possible problems. (For example, if there are platforms someday that libjit does not support.) Reading a Lisp interpreter is very useful for learning.

> If the plan is to add a jit and keep the Lisp interpreter as well, we don't lose its advantages for study, and we can still support all the platforms -- but we add complexity even more.

> I don't think a 3% speedup is worth those drawbacks. Or even a 10% speedup. A really big speedup would justify the costs.

Why do they still let RMS weigh in on technical matters today?

No wonder Emacs is losing users.


While he sometimes overrules, he's completely reasonable here. There have been multiple approaches to JIT-ing Emacs bytecode, and all of them offered a very limited speed-up.

Jitting engines bring extra dependencies and one more layer of complexity to the C core of the editor. And not too many editors support a full language of their own. Notice that Emacs is mostly a volunteer effort and the resources are very limited.

For example, this particular engine seems to offer a good speed boost on numeric operations, but one of the replies mentions that realistic complex code is sometimes slower than with the pure bytecode VM. Probably a bug, but still...

After all those failed attempts, a certain level of scepticism is understandable.


A 3% increase in speed is a minuscule improvement. Emacs has a small, dedicated group of maintainers; increasing the maintenance burden will stall other developments (such as the recent addition of async, which addresses a much bigger performance issue). If Emacs is losing users, I would guess it is because making it an IDE takes more configuration, and has more issues, than competing products.


it's 3x, 3 times, not 3%



