There's a reason a lot of programming language papers use formal math notation: it's far more concise to express the entire typing rules of your language as a dozen or so inference rules than as a more convoluted implementation.[0]
Formulog looks like it could potentially replace that math notation with something just as rigorous, yet executable.
On the other hand, denotational semantics are usually straightforward enough to be implemented by reading off the math. As an example, it was quite easy for me to translate Scheme's semantics to code.[1]
I hope it gets reimplemented in a functional language as well; Formulog's current implementation is in Java,[2] which might not be ideal for reducing implementation bugs.
Any advice for learning to read the PL math notation? The PL papers I've seen normally take it for granted, so I don't even know what it's called in order to search for it!
If you're talking about the inference rules used to specify the type system of a programming language, one of the references on Wikipedia[0] is [1], which from a quick skim seems adequate. The main takeaway is that the things above the line are the assumptions, and the thing below the line is what you conclude. Γ is usually used for the typing environment (a map from variables to their types), and the turnstile ⊢ can be read as "the context on the left entails the typing information on the right". As a quick example:
    Γ ⊢ e1 : Int    Γ ⊢ e2 : Int
    -----------------------------
         Γ ⊢ e1 + e2 : Int
This reads: if e1 has type Int in Γ, and e2 has type Int in Γ, then e1 + e2 also has type Int in Γ. These rules are really bidirectional, which is what allows you to perform typechecking. If you have an expression (a + b) expected to have type Int, you can reduce it into two subproblems: typechecking a expecting an Int, and similarly for b.
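Here's a minimal sketch of how that rule turns into code (OCaml purely for concreteness; the expression type is made up):

    (* a made-up expression type with just enough cases for the rule above;
       Γ is omitted since this toy language has no variables *)
    type ty = TInt | TBool

    type expr =
      | IntLit of int
      | BoolLit of bool
      | Add of expr * expr

    (* reading the rule bottom-up: to conclude e1 + e2 : Int,
       discharge the premises by checking e1 : Int and e2 : Int *)
    let rec typecheck (e : expr) : ty option =
      match e with
      | IntLit _ -> Some TInt
      | BoolLit _ -> Some TBool
      | Add (e1, e2) ->
        (match (typecheck e1, typecheck e2) with
         | (Some TInt, Some TInt) -> Some TInt
         | _ -> None)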
It was a great refresher as someone who once liked math but hasn't done much of it in ~20 years :) I had seen the blog posts, but there was some "color" in the videos that helped. For example I didn't realize that the fonts sometimes matter! Honestly, I still don't really read the notation, as I haven't had a strong reason to, but I feel it would be useful at some point.
----
For others, I also recommend this 2017 talk by Guy Steele, "It's Time for a New Old Language".
Because even people in the field seem to have problems with the notation. He was also asked about this work a few days ago here, and said he was still working on it in the background (being a "completionist").
FWIW as you know, Oil is more static than shell, and that was largely motivated by tools and static analysis (and negatively motivated by false positives in ShellCheck https://news.ycombinator.com/item?id=22213155)
I would like to go further in that direction, but getting the basic functionality and performance up to par has taken up essentially 100% of the time so far :-(
My use of Zephyr ASDL was also partly motivated by some vague desire to get the AST into OCaml. However, I haven't used OCaml in quite a while, and I get hung up on small things like writing a serializer and deserializer. I don't want to do it for every type/schema, so it requires some kind of reflection. My understanding is that there are a bunch of packages that do this, like sexplib, but I never got further than that.
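For what it's worth, my rough understanding of what the sexplib route looks like (an untested sketch; the AST type is a stand-in, not my real schema):

    (* requires the sexplib and ppx_sexp_conv opam packages *)
    open Sexplib.Std

    (* stand-in AST type; imagine this generated from the ASDL schema *)
    type expr =
      | Var of string
      | Add of expr * expr
    [@@deriving sexp]

    (* [@@deriving sexp] generates sexp_of_expr and expr_of_sexp,
       so there's no hand-written serializer per type *)
    let () =
      let e = Add (Var "a", Var "b") in
      print_endline (Sexplib.Sexp.to_string (sexp_of_expr e))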
Formulog sounds very nice, so I wonder if there is some recommended way of bridging the gap? For example imagine you want to load enormous Clang AST or TypeScript ASTs into Formulog. The parsers alone are 10K-30K lines of code, i.e. it's essentially infeasible to reproduce those parsers in another language in a reasonable time. And even just duplicating the schema is a pretty big engineering issue, since there are so many node types! I could generate them from Zephyr ASDL, but other projects can't. I wonder if you have any thoughts on that? i.e. to make the work more accessible on codebases "in the wild"
-----
Also FWIW I mentioned this "microgrammars" work a few days ago because I'm always looking for ways to make things less work in practice :)
Thanks! :) We should be very clear that the bulk of the work is Aaron Bembenek's.
I think Formulog would work great for analyzing the shell---as would any other Datalog, though SMT-based string reasoning will certainly come in handy. I don't think it will help you with parsing issues, though. The general approach to static analysis with Datalog avoids parsing in Datalog itself, relying on an EDB ("extensional database"---think of it as 'ground facts' about the world, which your program generalizes) to tell you things about the program. See, e.g., https://github.com/plast-lab/cclyzer/tree/master/tools/fact-... for an example of a program for generating EDB facts from LLVM. Just like real-world parsers, these are complicated artifacts.
Ah OK thanks for the link. Since it depends on commercial software, I don't see a path to trying it (which is fine, because I probably don't have time anyway :-/ )
So are you saying that it's more conventional to serialize relations from C++ or Python, rather than serialize an AST as I was suggesting?
Your blog post mentions ASTs too, so I'm not quite clear on that point. I don't have much experience writing such analyzers, and I'd be interested if there is any wisdom / examples on serializing ASTs vs. relations, and if the relations are at the "same level" as the AST, or a higher level of abstraction, etc.
-----
FWIW I read a bunch of the papers by Yannis because I'm interested in experiences of using high level languages in production.
I did get hung up on writing simple pure functions in Prolog. There seems to be a debate over whether unification "deserves" its own first-class language, or whether it should be a library in a bigger language, and after that experience, I would lean toward the latter. I didn't really see the light in Prolog. Error messages were a problem -- for the user of the program, and for the developer of the program (me).
So while I haven't looked at Formulog yet, it definitely seems like a good idea to marry some "normal" programming conveniences with Datalog!
I'd say it's conventional to reuse an existing parser to generate facts.
The AST point is a subtle one. Classic Datalog (the thing that characterizes PTIME computation) doesn't have "constructors" like the ADTs (algebraic data types) we use in Formulog to define ASTs; it doesn't even have records the way Soufflé does. So instead you'll get facts like this (relation names are hypothetical):
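    // hypothetical relations flattening the AST of "a + b" into a node table
    node(1, "add").
    child(1, 0, 2).    // node 1's first child is node 2
    child(1, 1, 3).
    node(2, "var").
    name(2, "a").
    node(3, "var").
    name(3, "b").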
I'm not sure if that's what you mean by serializing relations. But having ASTs in your language is a boon: rather than having dozens of EDB relations to store information about your program, you can just say what it is (again with illustrative names):
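    (* an ADT for expressions, then a single fact carrying the whole tree;
       constructor and relation names here are illustrative *)
    type exp =
      | e_var(string)
      | e_add(exp, exp)

    prog_exp(e_add(e_var("a"), e_var("b"))).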
As for your point about Prolog, it's a tricky thing: the interface between tools like compilers and the analyses they run is interesting, but not necessarily interesting enough to publish about. So folks just... don't work on that part, as far as I can tell. But I'm very curious about how to have an efficient EDB, what it looks like to send queries to an engine, and other modes of computation that might relax monotonicity (e.g., making multiple queries to a Datalog solver, where facts might start out true in one "round" of computation and then become false in a later "round"). Query-based compilers (e.g., https://ollef.github.io/blog/posts/query-based-compilers.htm...) could be a good place to connect the dots here, as could language servers.
And if you're fine with something much longer than others have suggested, Types And Programming Languages [0] covers the notation, in addition to most of the other knowledge most PL papers take for granted.
Roughly: the way facts work in Datalog and similar systems has pretty strong monotonicity properties. So if you want to allow adding more predicates and computations to the system, you kinda want to only allow monotonic functions! This has some pretty interesting surprises when it comes to how to model things like true or false!
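To make "monotone" concrete, here's a toy check, in OCaml purely for illustration, treating facts as the two-point lattice where false <= true:

    (* the two-point lattice on booleans: false <= true,
       i.e. a fact can go from underived to derived, never back *)
    let leq a b = (not a) || b

    (* f is monotone iff x <= y implies f x <= f y *)
    let is_monotone f =
      List.for_all
        (fun (x, y) -> not (leq x y) || leq (f x) (f y))
        [ (false, false); (false, true); (true, true) ]

    let () =
      assert (is_monotone (fun x -> x || false));  (* disjunction: fine *)
      assert (not (is_monotone not))               (* negation: not monotone *)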
Datafun is quite cool! Formulog and Datafun seem similar---both combine logic programming and pure functional programming---but they take wildly different approaches.
Datafun is a foundational re-imagining of what a Datalog could look like: start with a higher-order programming language and give it a first-class notion of least fixed points. A type system for tracking monotonicity lets you roll your own Datalog. It's impressive that you can reconstruct semi-naive evaluation (morally: in each 'round' of evaluation, only apply rules matching new results) in their setting (https://dl.acm.org/doi/abs/10.1145/3371090). Datafun is still a ways away from the performance and implementation maturity of existing Datalogs, though.
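If it helps to see it concretely, here's what semi-naive evaluation amounts to for transitive closure, as a sketch in plain OCaml (nothing Datafun-specific, and not from the paper):

    module PairSet = Set.Make (struct
      type t = int * int
      let compare = compare
    end)

    (* path(x,z) :- edge(x,z).  path(x,z) :- path(x,y), edge(y,z). *)
    let transitive_closure edges =
      let edge_set = PairSet.of_list edges in
      let rec step all delta =
        if PairSet.is_empty delta then all
        else
          (* semi-naive: join only the facts new this round (delta)
             against the edges, instead of rejoining everything *)
          let next =
            PairSet.fold
              (fun (x, y) acc ->
                PairSet.fold
                  (fun (y', z) acc ->
                    if y = y' then PairSet.add (x, z) acc else acc)
                  edge_set acc)
              delta PairSet.empty
          in
          let fresh = PairSet.diff next all in
          step (PairSet.union all fresh) fresh
      in
      step edge_set edge_set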
Formulog's approach is to try to let Datalog be Datalog as much as possible. We end up with restrictions around higher-order functions and other FP features in order to keep things simple on the Datalog side---quite the opposite of Datafun's fancy type system. Our Formulog interpreter does pretty well with internment, parallel execution, and magic sets, but you could easily port our design to existing Datalog compilers and get even bigger speedups. It's not clear how to do that for Datafun... yet.
(I suspect you could port our SMT interface to Datafun without too much of a problem, too.)
I would have thought the SMT queries would be the most time-consuming part of this, but the authors make a big deal of leveraging Datalog optimizations to drive performance.
Especially given they purposefully don't re-use the SMT context across SMT terms.
Aren't the big SMT solvers already doing a bunch of optimization to allow incremental (push/pop) queries to be fast?
Super interesting, and a cool technique. Do you have any insight into why CSA (check-sat-assuming) outperforms PP (push/pop) so often? I would have assumed the solvers were tuned for PP.
I think the solvers _are_ tuned for PP. But we're comparing CSA and PP on the queries that Formulog issues... which don't really match well with the DFS discipline that the PP stack aligns with. I think CSA beats PP in our experiments because CSA is more flexible about locality.
Broadly---and I haven't looked at the memory usage to confirm this---I think CSA trades space (cache more old formulae, not just the prefix of some stack) for time (look, our answers are in the cache!).
Poplog was a 1980s integration of Prolog, Common Lisp, the C-like POP-11, and SML. Regrettably, its academic authors had commercial dreams, which seemingly neutered the transformative impact I thought it might then have had.
It is also interesting to see how typeclasses/traits and OCaml's (still upcoming?) "modular implicits" are related to logic programming.
I commend the rustc team for taking this lineage seriously in developing Chalk[1] and using differential-dataflow for the next borrow checker. If they combine those into a differential-dataflow-powered Chalk it will be very formidable!
We'll have to catch up Haskell at some point, sigh.
[0] Contrast the code and math: https://www.andres-loeh.de/LambdaPi/LambdaPi.pdf
[1] https://github.com/siraben/r5rs-denot
[2] https://github.com/HarvardPL/formulog