How is it like LuaJIT? The calling convention thing means they have a method JIT, the assembly isn't hand-written, and I don't see anything in the commit message that relates to this list of LuaJIT's innovations: http://article.gmane.org/gmane.comp.lang.lua.general/58908
I'd be curious to see more JITs using Crankshaft's approach: generate machine code directly that both executes the original JS and gathers runtime feedback.
The "old" baseline jit (the middle tier) is a simple "method" hit in the same vein as V8, it executes directly and accumulates runtime type info -- the still takes longer to generate and uses more memory than the interpreter. A lot of short run programs take less time to run than to do codegen for (JSONP for instance can do really unpleasant things to codegen time)
Tricky question: we try to keep all patches small, as the larger a patch is, the longer the review takes, and the more update-review-update-review... cycles are required.
There are some things (like this patch) where the core change (in this case a new execution engine) can't be landed in pieces: either you have a complete working execution engine that passes all tests and doesn't regress performance, or you don't. But even in these cases we try to reduce the size and complexity of the core patch. If you look at the recent commits to JSC you'll see a bunch of refactoring landing prior to LLInt that got the non-LLInt-specific code into the tree in advance of LLInt itself. As it is, the initial commit only has support for our 32-bit value encoding, and doesn't support all the backends that the JSC JIT supports. The basic reason for this is that the static assembler does need to do some register allocation, and the easy way to ensure that it's correct is to throw it at 32-bit x86. Our value representation requires 2 registers on 32-bit platforms, and x86-32 has very few GPRs we can use, so it acts as an excellent stress test vs. any other supported platform.
If you actually look at the commit logs for JavaScriptCore you'll see that all development goes on in public and all patches follow the WebKit review rules (you can't do an unreviewed code dump, and large code dumps aren't pleasant when you need reviews). JSC is developed in the open: everyone can see the work we're doing, and the progress we're making, as we're making it. If you had actually followed the link you would have seen a change log pointing to a bug on bugs.webkit.org (https://bugs.webkit.org/show_bug.cgi?id=75812) that shows the huge amount of (public) work that preceded actually landing this. Additionally, there was substantial refactoring in multiple commits leading up to the final landing of LLInt in order to reduce the patch size as much as possible.
The reason LLInt landed as a "code dump" is that this is the minimal size possible -- you can't land part of an execution engine, as by definition part of an engine is not sufficient to pass tests. As it is, this is 32-bit only for the simple reason that 32-bit x86 is the easiest way to stress register allocation: our value encoding requires two 32-bit registers per value, and 32-bit x86 has very few registers.
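As a rough illustration of why a value eats two GPRs on x86-32, here is a hedged C++ sketch of a tag/payload split for a 64-bit encoded value; the tag constants and names (EncodedValue, boxInt32) are invented for this example and differ from JSC's real encoding.

```cpp
// Hypothetical sketch: a 64-bit value encoding on a 32-bit platform splits
// into a tag word and a payload word, each needing its own register.
#include <cstdint>
#include <cstdio>

constexpr uint32_t Int32Tag = 0xFFFFFFFF;  // illustrative tag values only
constexpr uint32_t CellTag  = 0xFFFFFFFB;

struct EncodedValue {
    uint32_t payload;  // on x86-32 this would live in one GPR (e.g. eax)...
    uint32_t tag;      // ...and the tag in another (e.g. edx)
};

EncodedValue boxInt32(int32_t i) {
    return { static_cast<uint32_t>(i), Int32Tag };
}

bool isInt32(const EncodedValue& v) { return v.tag == Int32Tag; }

int main() {
    EncodedValue v = boxInt32(42);
    std::printf("tag=%#x payload=%d isInt32=%d\n",
                v.tag, static_cast<int32_t>(v.payload), isInt32(v));
}
```

With only a handful of usable GPRs and every live value consuming two of them, x86-32 runs out of registers almost immediately, which is exactly what makes it a good stress test for the static assembler's register allocation.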
TL;DR: You can't really do massive code dumps in the WebKit repository; the commit rules make it very hard. Alas, some patches are intrinsically large, as a partial patch won't be sufficient to pass tests, but even then we don't like them.
There are various tool scripts in WebKit written in basically every common scripting language -- Perl, Python, Ruby, shell(!!!) are the ones I can think of off the top of my head.
This change only affects JavaScriptCore, the original WebKit JS engine. (Various iterations of JavaScriptCore have been promoted as "SquirrelFish" and "SquirrelFish Extreme" and marketed by Apple as "Nitro", so the nomenclature around this engine is a bit confusing.)
Chrome uses V8 instead of JavaScriptCore, and I don't think anything would make them change, since having their own VM is such a large asset for Google (for example, they can use V8 to promote Dart).
JavaScriptCore actually has several JITs. There's a simple JIT that translates JS bytecode to machine code (I think it's called the "method JIT"). There's also a more advanced JIT that does data-flow analysis; this is fairly new -- I think it shipped last year in 64-bit Safari.
This new development is about improving performance on code that may not need to be JIT-compiled. Instead of always going to the JIT, they now use a fast interpreter (called LLInt), and only do the native compilation if a code path is deemed hot.
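To illustrate the tiering idea described above, here is a minimal C++ sketch: interpret by default, count executions, and only pay for codegen once a function proves hot. The threshold value and the names (kJITThreshold, Function, compile, run) are made up for illustration; real engines use more elaborate heuristics.

```cpp
// Hypothetical sketch of tier-up: cheap to start in the interpreter,
// compile only the code paths that turn out to be hot.
#include <cstdint>
#include <cstdio>

constexpr uint32_t kJITThreshold = 100;  // invented tier-up threshold

struct Function {
    const char* name;
    uint32_t executionCount = 0;
    bool hasCompiledCode = false;
};

void compile(Function& f) {
    // Stand-in for handing the function's bytecode to the JIT.
    f.hasCompiledCode = true;
    std::printf("tiering up %s after %u runs\n",
                f.name, static_cast<unsigned>(f.executionCount));
}

void run(Function& f) {
    if (f.hasCompiledCode) {
        // Jump into the machine code produced earlier.
        return;
    }
    // Interpret this invocation: no codegen cost, slower per iteration.
    if (++f.executionCount >= kJITThreshold)
        compile(f);
}

int main() {
    Function hot{"hotLoopBody"};
    for (int i = 0; i < 150; ++i)
        run(hot);  // first 100 runs interpreted, then compiled once
}
```

The payoff is that code executed only a handful of times never incurs compilation or machine-code memory costs at all.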
HP just released the source code for their Isis browser for webOS on GitHub, and their comments about it all trumpet its use of JavaScriptCore (and QtWebKit). They appear to be using code from around r104935, and it's likely they'll want to pull in the LLInt/three-tier VM.
It's newer than anything they've shipped (actually just about everything is, which is a nice change of pace) and we're all waiting like kids on a sugar buzz for the HP-supported effort by the homebrew community to get all the new pieces out through Preware. I will be amused if this hits webOS as an official release before any of the other tablet OSes, but HP seems to be taking the Dadaist approach to the platform.
As stated in the commit logs, this doesn't improve performance on any of the relevant benchmarks, and V8 is still generally king of those (although V8 has been losing to SpiderMonkey on SunSpider for some time now).