I'm working with Z3 now, and it's been a real joy. One of the biggest advances in programming languages in recent years, perhaps. Glad to see it open-sourced and moved to Github.
Long day at work, didn't see the thread that grew out of this comment. What do I use Z3 for? Right now, I'm implementing a web browser in Z3 so that Z3 can solve CSS layout queries. Like, "write me a CSS file given this website mockup". But honestly, it's just damn fun to play with, and especially for solving small logic puzzles.
For example, a friend and I were playing a board game where each player had hit points, represented with two types of cards: type A had "5" on one side and "1" on the other, and type B had "20" on one side and "10" on the other. You'd represent your current hit points by displaying some number of these cards (out of 6 of type A and 3 of type B) face up and adding up the totals. So I wanted to find out: what's the smallest hit point value that I can't represent with some combination of cards? The traditional mathematical approach would be generating functions, but that would require at least pencil and paper. With Z3, I execute:
If you run this for different values of i (using push and pop if you're clever; I just called Z3 100 times), you can get a "sat" or "unsat" which tells you whether a particular total is reachable.
I am not the OP, but I'm using Z3 in my research to help with the formal verification of hardware and firmware.
My problem is as follows. I have a microcontroller which runs firmware. I want to verify various things about the hardware+firmware combination. For instance, I may be interested in proving that user mode firmware cannot change the MMU configuration.
There are many parts to this problem. First, verifying that the kernel doesn't expose routines that let you do these kinds of things; this is a kind of s/w verification problem, somewhat well studied. Second, making sure there are no h/w bugs in the microcontroller itself which user mode software can exploit to do bad stuff; this is traditional h/w verification, also well studied. Third, making sure there aren't weird corners of the microcontroller ISA specification itself which can be exploited; this is also a kind of h/w bug, albeit a specification bug.
The third part is where Z3 comes in because often, there isn't any specification, and if there is, it's a bunch of PDFs or word documents. What we want is to generate a formal specification, which you can examine using formal tools to prove that it satisfies certain properties. And then we want to prove that our implementation conforms to this specification. We're using some techniques from program synthesis with Z3 as the solver for this.
Hi, I can't see an email address in your profile, so sorry for the public message. With the lowRISC project (http://lowrisc.org/) and related research at the University of Cambridge, we're becoming very interested in formal verification of hw+sw. I found this work out of Kaiserslautern interesting: http://www-eda.eit.uni-kl.de/eis/research/publications/paper.... Could you say any more about your research? I'd be very interested in an email conversation - I'm at alex.bradbury@cl.cam.ac.uk
I'm not pavpanchekha, but I worked on a project[1] that used Z3 to optimize array-forth code without having to implement a classical optimizing compiler. We used Z3 to synthesize basic blocks of array-forth code against an executable specification, which involves both verifying that two programs have the same behavior and actually searching for programs that satisfy certain constraints.
There is a whole field of research in program synthesis, much of which can be done efficiently on top of theorem provers like Z3. You can read these slides[2] for a good overview.
My experience with concolic testing has been that single-path satisfiability queries tend to be quite cheap, and it's actually a bit challenging to write the tooling around the solver in such a way that the solver becomes the bottleneck. This is not to say that combining concolic testing with other testing methods isn't an interesting research topic :)
Then you might rather use CBMC, which does exactly that: it translates C code into SMT formulas, allows assertions on symbolic variables, and solves them.
And note that for harder crypto problems Z3 is not the best, it's just the easiest to use.
SAT solvers and SMT solvers are great for the kinds of problems where your first thought would be to use brute force or backtracking. They use some really clever implementation tricks and heuristics that are likely to beat any algorithm you write from scratch.
One of the more famous places where these constraint solvers are used is dependency management in package systems. For example, zeroinstall uses a SAT solver to resolve dependency conflicts.
A sillier example is the time I used a SAT solver to find the solutions to a Flash puzzle game. The part I'm proudest of is that the SAT solver could handle the highly symmetric cases that stumped the hand-written backtracking algorithms. And if I had used an SMT solver instead of SAT, I wouldn't have had to convert everything all the way down to boolean formulas.
Would a SAT solver be useful for compiler technologies?
A compiler is basically solving the problem of: here is a sequence of things I want done; here is a description of what the machine can do; find a combination of the latter which achieves the former. The details are the tricky bit, particularly for unpleasantly asymmetric instruction sets like the old 8-bit architectures.
It would be so nice if I could just throw a description of the problem at some sort of machine reasoning system and have it Just Work...
Yes, although it's hard to make it scale in every case. What you're talking about sounds like the field of program synthesis[1], which uses SMT solvers (among other technologies) to generate code to some specification. I worked on a system[2] that used this sort of technique to optimize code for a weird architecture (GreenArrays' Forth-based chip[3]) and it was certainly much easier to implement than a classical optimizing compiler.
Perhaps the most relevant example project here would be Rosette[4], which is basically a framework for easily writing synthesizers for different tasks. It would let you implement a description of your machine in Scheme (i.e., write a simple interpreter) and automatically generate a synthesizer for it that could compile programs in your language to formulas in a solver of some sort.
Hey, GreenArrays! I did code generation for the ShBoom / PSC1000 once --- heavily bending a traditional register-centric compiler architecture to produce terrible, terrible code. Fascinating processor, though. And I have an MSP430 right here on my desk.
I'm afraid your paper is largely beyond me, but I'll go read up on program synthesis. Thanks!
I had a long day at work, and didn't see this. I posted another comment (couldn't edit; too late). You may be sure that I am an MS shill, cleverly astroturfing their now-free product.