Hacker News new | past | comments | ask | show | jobs | submit login
Building a Lox Interpreter in Julia (lukemerrick.com)
74 points by lukemerrick on June 3, 2023 | hide | past | favorite | 24 comments



Disclaimer: I wrote this blog post. If this were an "Ask HN" post, though, the question would be "What next after reading Crafting Interpreters?" I have only done the tree-walk interpreter half of the book, but I'm already excited to move beyond Lox, and I'm curious to hear what others have done in this situation.


I'd do the second half of the book, it's got plenty of important techniques that aren't taught in the first half.

Once you've done that, you can see how it's done "for real" in a production setting by reading the source code to Wren (by the same author). [0]

Wren's implementation maps very closely to the C implementation of Lox.

At that point, you're ready to do your own language. As far as compilation techniques go, you'll still be missing the "middle-end" of the compiler, which uses an SSA IR. I don't recommend implementing this yourself, I'd look into MLIR (from the LLVM project) if you want to actually work on the middle-end. You can create one or more dialects that are unique to your language, and implement your own compiler transformations. There are lots of existing papers and projects on GitHub demonstrating doing so.

[0] https://wren.io/
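To make the "middle-end" remark above concrete: SSA (static single assignment) form means every variable in the IR is assigned exactly once, with phi nodes merging values where control flow joins. A minimal pseudocode sketch (the opcodes and names here are illustrative, not from any particular compiler):

```
; source:                ; same code in SSA form: each name is assigned
;   x = 1                ; once, and a phi node at the join picks the
;   x = x + 2            ; value that flowed in from the taken branch
;   if (c) x = x * 3
;   print x

  x1 = 1
  x2 = x1 + 2
  br c, then, join
then:
  x3 = x2 * 3
join:
  x4 = phi(then: x3, entry: x2)
  print x4
```

This single-assignment property is what makes transformations like constant propagation and dead-code elimination straightforward, which is why MLIR and LLVM both build on it.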


I agree on the suggestion to do part two, it's where things get really fun!

One thing you can do with the finished Lox (or Monkey, if you prefer WACIG) before going into the world of intermediate representations is implementing a peephole optimizer. You look for reducible patterns in the bytecode, and replace them with optimized bytecode. You can also look for certain patterns and replace naive implementations with native builtins/intrinsics. You can work with the raw bytes of the bytecode, so you don't need to introduce an IR just yet.
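To illustrate the idea, here's a minimal peephole pass in Python over a toy stack-based bytecode. The opcodes (`PUSH`, `ADD`, `POP`) are invented for this sketch and aren't Lox's or Monkey's actual instruction set; a real implementation would work over the raw encoded bytes.

```python
def peephole(code):
    """One peephole pass over a list of (opcode, *operands) tuples.

    Pattern 1: PUSH a, PUSH b, ADD  ->  PUSH (a+b)   (constant folding)
    Pattern 2: PUSH x, POP          ->  (nothing)     (dead push/pop pair)
    """
    out = []
    i = 0
    while i < len(code):
        # Pattern 1: fold two constant pushes followed by ADD.
        if (i + 2 < len(code)
                and code[i][0] == "PUSH" and code[i + 1][0] == "PUSH"
                and code[i + 2][0] == "ADD"):
            out.append(("PUSH", code[i][1] + code[i + 1][1]))
            i += 3
            continue
        # Pattern 2: a value pushed and immediately popped has no effect.
        if i + 1 < len(code) and code[i][0] == "PUSH" and code[i + 1][0] == "POP":
            i += 2
            continue
        out.append(code[i])
        i += 1
    return out

print(peephole([("PUSH", 1), ("PUSH", 2), ("ADD",)]))  # → [('PUSH', 3)]
```

A production optimizer would rerun the pass until a fixpoint, since one rewrite can expose new reducible patterns, and would be careful around jump targets that land inside a rewritten window.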

The Apex compiler at Salesforce does the vast majority of its optimizations as peeps.

EDIT: I _just_ wrote another comment to someone asking similar questions a few days ago. Here's a link to the parent question, check out my thoughts as well as others in the thread. https://news.ycombinator.com/item?id=36119915


Wow, the whole concept of a peephole optimizer is a bit mind blowing to me. I'm appreciating all the reasons to power through to writing a bytecode VM as the next step.

I'm not sure how far down the compiler I actually will enjoy going vs. exploring ideas around type systems, linters, etc. up near the AST level, but if I do venture down this advice will certainly come in handy!


Do you have any pointers to papers/books exploring peephole optimizers? I've read a few papers, but found them underwhelming…


I don't have an academic background, and I'll agree that the majority of books I have picked up haven't covered the subject very well. It's a pretty common technique though, for assembly and for bytecode, so I've learned by reading implementations. `peephole.c` in CPython is particularly easy to read with a small understanding of the CPython API. It's a very limited implementation, but the idea goes far. Lots of things that you might expect the compiler to optimize in IR can be done directly from the bytecode instead.


Thank you for the suggestions! I was actually just searching about MLIR today after reading some Julia language community discussions on the new Mojo language that uses MLIR.


Personally, I found "Writing an Interpreter in Go" better than "Crafting Interpreters". It has a follow-up book titled "Writing a Compiler in Go", which (similar to Crafting Interpreters' second half) implements a virtual machine. The books take a lot of inspiration from Crafting Interpreters.

If this sounds at all appealing to you, I would check it out. So far I've found Monkey somewhat more compelling than Lox (might be personal preference, though), and I thought the exposition (and code quality) was overall better. Go is also very easy to pick up.


Thank you for the tip! Getting to implement a different language from Lox feels like a nice way to cut down on the tedium.


I continued by taking a compilers and an interpreters course at my university. In the interpreters course a friend and I designed a new language and implemented it. It turns out designing a language is pretty hard, but it was quite fun and I am proud of the project. Also I really wanted to implement a typechecker, so we did that too.

https://github.com/flofriday/Moose


You can try and build your own toy language for fun, or try to compile Lox to some assembly/LLVM IR.

Another option is to try and contribute to an existing language (Julia ;) ).


I really wish Bob would write a “Crafting Type Checkers” sequel.


There's Types and Programming Languages by Benjamin Pierce


I am wondering why the Julia Lox interpreter is so slow. Could it be that there is a lot of type-unstable code, or could it be an issue with Julia's garbage collector? (I can relate to the comment that it is quite hard to find usable information from Julia's profiler.)


I'm pretty sure it is type instability. The faster way to do this would be with something like https://github.com/YingboMa/Unityper.jl, which would fix that. The problem with profiling type-unstable code in Julia is that it makes everything slow, so the profiler will just show a big mess of everything being slow. We do need better resources, but I have no idea what they would look like.


I actually played with Unityper.jl and SumTypes.jl, but my conclusion was that if I was going to depart from dispatch on Julia types in my code, I might as well just stick to an untyped tree, since either way I'd have to have a single `evaluate` function for interpreting any kind of node.

Reconsidering now, it seems that there might be benefits beyond type dispatch to having a typed syntax tree, so maybe I'll give that a shot as a next step!


What do you mean by "We do need better resources"?


There currently don't exist great tools for figuring out why type-unstable code is slow. @code_warntype and Cthulhu can help find type stability problems, but they're pretty hard to use for newer users (or big functions).


I just finished the bytecode interpreter side, and I have similar questions. The next areas of interest for me are reasonable AST representations in C and learning how to translate a semantics from a PL paper to a runtime. Anyone have any recommendations?



a fast lisp interpreter/compiler?


I thought working with the lossless syntax tree was the key to Rust's pin-point precise error messages. How does rust-analyzer get and print the specific point of error if it throws away the non-essential information before that stage? I suppose the position information is stored along with the Error nodes, in the original parsing?


In general, if you keep the source code positions of every nontrivial token, and you keep the raw source code, then yeah, you can print out those pretty specific pinpoint error messages regardless of whether you keep your trees lossless. Also, if you want to include the filename in your messages (perhaps because, unlike Lox, your language supports imports), then you'll need more than just lossless trees to store the necessary information.
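A tiny sketch of that scheme in Python, with illustrative names (not Rust's or Lox's actual data structures): each token carries its line and column, and the diagnostic is rendered from the raw source plus that position.

```python
from dataclasses import dataclass

@dataclass
class Token:
    kind: str
    lexeme: str
    line: int   # 1-based line in the source
    col: int    # 1-based column of the lexeme's first character

def report(source, filename, tok, message):
    """Render a caret-style pinpoint error from raw source + token position."""
    src_line = source.splitlines()[tok.line - 1]
    caret = " " * (tok.col - 1) + "^" * max(len(tok.lexeme), 1)
    return (f"{filename}:{tok.line}:{tok.col}: error: {message}\n"
            f"    {src_line}\n"
            f"    {caret}")

src = "var x = 1 +* 2;"
print(report(src, "main.lox", Token("STAR", "*", 1, 12), "unexpected '*'"))
```

Nothing here needs a lossless tree: the raw source and per-token positions are enough to point at the exact character.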

I'm not sure exactly how Rust and rust-analyzer keep track of the info necessary to their excellent error messages and diagnostics, but I wouldn't be surprised if pinpoint messages were not the primary motivation for rust-analyzer to do lossless parsing.


Now we need a cream cheese interpreter and we'll be all set.



