The author has proven exactly this point: By going through a number of engine-specific / implementation-specific code transformations they have achieved a significant performance boost for hot code, which, for whatever reason, the JS engines themselves failed to attain with the optimization repertoire they already have.
Also, remember that JS engines are not all-powerful and all-knowing in their optimization techniques; it's still just a limited number of individually imperfect humans working on them, just like the rest of us. So naturally there are going to be opportunities for others to step in, help complete the picture, and increase the overall value and effectiveness.
Maybe in this case there is room for a JS-syntax-level tool that has more freedom in terms of execution time and related concerns: it can run at build time; it can pull in or bundle much more information about optimizations (imagine using a potentially expensive ML model to search for probable performance wins, or actually running micro- or macro-benchmarks during the build step); and it can be maintained outside of any JS engine implementation, with the added benefits of a potentially broader contributor base and a faster, more focused release cycle. Or this may not be a good idea after all; I cannot tell. All I know is that if we actually do see that there are, and will continue to be, optimization gaps in the JS engines themselves, then there is a way to fill them, and likely without having to switch to an entirely different (frontend) technology stack.
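To make the build-time idea a bit more concrete, here is a minimal sketch, assuming a hypothetical build script (the `candidates` and `pickFastest` names are made up for illustration; this is not an existing tool): it micro-benchmarks two candidate implementations of a hot function and reports which one the bundler should emit.

    // Hypothetical sketch only: `candidates` and `pickFastest` are made-up names.
    // A build step times candidate implementations of a hot function and reports
    // the winner; a real tool would then emit that variant into the bundle.
    const { performance } = require('perf_hooks');

    const candidates = {
      // idiomatic but allocation-heavy
      withArrays: (xs) => xs.map((x) => x * x).reduce((a, b) => a + b, 0),
      // plain loop, no intermediate arrays
      withLoop: (xs) => {
        let total = 0;
        for (let i = 0; i < xs.length; i++) total += xs[i] * xs[i];
        return total;
      },
    };

    function pickFastest(fns, input, iterations = 10000) {
      let best = null;
      for (const [name, fn] of Object.entries(fns)) {
        const start = performance.now();
        for (let i = 0; i < iterations; i++) fn(input);
        const elapsed = performance.now() - start;
        if (best === null || elapsed < best.elapsed) best = { name, elapsed };
      }
      return best;
    }

    const sample = Array.from({ length: 1000 }, (_, i) => i);
    console.log(pickFastest(candidates, sample)); // e.g. { name: 'withLoop', elapsed: ... }

Of course a real tool would also have to worry about representative inputs, engine warm-up, and measurement noise; this only shows the general shape of the idea.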
> By going through a number of engine-specific / implementation-specific code transformations they have achieved a significant performance boost for hot code, which, for whatever reason, the JS engines themselves failed to attain with the optimization repertoire they already have.
I think you're mischaracterizing what happened a little. Most of the author's improvements weren't engine- (or even JS-) specific; they were algorithmic improvements. But for the first two that were engine-specific, it's not like he applied a rote transformation that speeds up any script it's applied to. Rather, the author (himself a former V8 engineer) saw from the profile results that certain kinds of engine optimizations weren't being done, and rewrote the code in such a way that he knew those specific optimizations would take place as intended. Sure, a deep ML preprocessor might do the same - but only after trying 80K other things that had no effect, and on code that wasn't even hot, no?
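For illustration (a generic example of the kind of engine-level effect being described, not code from the article): V8 assigns objects hidden classes ("shapes") based on the order their properties are added, and a hot property access that only ever sees one shape stays monomorphic and optimizes well.

    // Generic illustration, not the author's code: keeping object shapes
    // consistent so a hot property access stays monomorphic in V8.

    // Shape-inconsistent: same fields, different insertion order, so the two
    // literals get different hidden classes and `p.x` below goes polymorphic.
    const mixed = [
      { x: 1, y: 2 },
      { y: 4, x: 3 },
    ];

    // Shape-consistent rewrite: construct every point the same way.
    function makePoint(x, y) {
      return { x, y }; // one hidden class for all points
    }
    const uniform = [makePoint(1, 2), makePoint(3, 4)];

    function sumX(points) {
      let total = 0;
      for (const p of points) total += p.x; // monomorphic when only fed `uniform`
      return total;
    }

    console.log(sumX(mixed));   // also 4, but the access site sees two shapes
    console.log(sumX(uniform)); // 4, one shape, better inline-cache behavior

Spotting that from a profile, and knowing which rewrite the engine will actually reward, is exactly the kind of judgment call that's hard to automate blindly.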
More to the point, though, it strikes me that you say JS engines aren't all-powerful, but in the same breath you seem to assume that, just because V8 didn't optimize the code in question, it can't. It seems very likely to me that any case you can find where a preprocessor improves performance is a case where there's a fixable engine optimization bug. Sure, in principle one could build a preprocessor for such cases, but it seems more useful to just report the engine bugs.