When I was prototyping a programming language with support for contracts (or refined types, e.g. `x : int if x > 0`), I tested a few different SMT solvers, including AltErgo, CVC and Yices (these seemed to be the most used open source solvers).
However, none of them were nearly as useful as Z3. In particular, they had no concept of a stack that lets the user add assertions locally and then retract them later. Without such a stack, one would need to restart the solver and re-declare all variables (with all their constraints) for every function call with a constraint, whereas with Z3 a single SMT script suffices for each function being verified.
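For illustration, here is roughly what that looks like with Z3's Python bindings (just a sketch; the contract on x is made up):

    from z3 import Int, Solver

    s = Solver()
    x = Int('x')
    s.add(x > 0)        # the refinement on x, declared once

    s.push()            # open a local scope for one call site
    s.add(x < 0)        # obligation at this call site
    print(s.check())    # unsat: the call violates the contract
    s.pop()             # retract only the local assertion

    s.push()
    s.add(x < 10)
    print(s.check())    # sat: this call is fine
    s.pop()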
Also, Z3 supports many more theories; in particular, it supports non-linear integer arithmetic (which is an undecidable, incomplete theory), either by using some heuristics or by translating the statements into the theory of real arithmetic (which is complete and decidable).
Also take a look at the "licensing" section to see why switching to the MIT license is a big deal. Even open source software projects like LLVM have had to worry about whether using Z3 was OK or not. Now they don't any more!
I'm working with Z3 now, and it's been a real joy. One of the biggest advances in programming languages in recent years, perhaps. Glad to see it open-sourced and moved to Github.
Long day at work, didn't see the thread that grew out of this comment. What do I use Z3 for? Right now, I'm implementing a web browser in Z3 so that Z3 can solve CSS layout queries. Like, "write me a CSS file given this website mockup". But honestly, it's just damn fun to play with, and especially for solving small logic puzzles.
For example, a friend and I were playing a board game where each player had hit points, which they represented with two types of cards: type A had "5" on one side and "1" on the other, and type B had a "20" on one side and a "10" on the other. You'd represent your current hitpoints by displaying some number of these cards (out of 6 of the first type and 3 of the second type) face up, and adding up the totals. So, I wanted to find out: what's the smallest hit point value that I can't represent with some combination of cards? The traditional mathematical approach would be generating functions, but that would require at least pencil and paper. With Z3, I execute:
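Roughly the following, using the Python bindings (the variable names here are just illustrative):

    from z3 import Ints, Solver, sat

    # a5/a1: type-A cards showing 5 or 1; b20/b10: type-B cards showing 20 or 10
    a5, a1, b20, b10 = Ints('a5 a1 b20 b10')

    def reachable(i):
        s = Solver()
        s.add(a5 >= 0, a1 >= 0, b20 >= 0, b10 >= 0)
        s.add(a5 + a1 <= 6)       # only 6 cards of the 5/1 type
        s.add(b20 + b10 <= 3)     # only 3 cards of the 20/10 type
        s.add(5*a5 + a1 + 20*b20 + 10*b10 == i)
        return s.check() == sat

    # smallest total that no combination of cards can display
    print(next(i for i in range(1, 101) if not reachable(i)))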
If you run this for different values of i (using push and pop if you're clever; I just called Z3 100 times), you can get a "sat" or "unsat" which tells you whether a particular total is reachable.
I am not the OP, but I'm using Z3 in my research to help with the formal verification of hardware and firmware.
My problem is as follows. I have a microcontroller which runs firmware. I want to verify various things about the hardware+firmware combination. For instance, I may be interested in proving that user mode firmware cannot change the MMU configuration.
There are many parts to this problem. First is verifying that the kernel doesn't expose routines that let you do these kinds of things. This is a kind of s/w verification problem, somewhat well studied. Second, you want to make sure there are no h/w bugs in the microcontroller itself which the user mode software can exploit to do bad stuff. This is traditional h/w verification, also well-studied. The third is that there aren't weird corners of the microcontroller ISA specification itself which can be exploited. This is also a kind of h/w bug albeit a specification bug.
The third part is where Z3 comes in because often, there isn't any specification, and if there is, it's a bunch of PDFs or word documents. What we want is to generate a formal specification, which you can examine using formal tools to prove that it satisfies certain properties. And then we want to prove that our implementation conforms to this specification. We're using some techniques from program synthesis with Z3 as the solver for this.
Hi, I can't see an email address in your profile, so sorry for the public message. With the lowRISC project http://lowrisc.org/ and related research at the University of Cambridge, we're becoming very interested in formal verification of hw+sw. I found this work out of Kaiserslautern interesting: http://www-eda.eit.uni-kl.de/eis/research/publications/paper.... Could you say any more about your research? I'd be very interested in an email conversation - I'm at alex.bradbury@cl.cam.ac.uk
I'm not pavpanchekha, but I worked on a project[1] that used Z3 to optimize array-forth code without having to implement a classical optimizing compiler. We used Z3 to synthesize basic blocks of array-forth code against an executable specification, which involves both verifying that two programs have the same behavior and searching for programs that meet certain constraints.
There is a whole field of research in program synthesis, much of which can be done efficiently on theorem-provers like Z3. You can read these slides[2] for a good overview.
My experience with concolic testing has been that single-path satisfiability queries tend to be quite cheap, and it's actually a bit challenging to write the tooling around the solver in such a way that the solver becomes the bottleneck. This is not to say that combining concolic testing with other testing methods isn't an interesting research topic :)
Then you might rather use CBMC, which does exactly that: it translates C code into SMT rules, allows assertions on symbolic variables, and solves them.
And note that for harder crypto problems Z3 is not the best, it's just the easiest to use.
SAT solvers and SMT solvers are great for the kind of problems where your first thought would be to use brute force or backtracking. They use some really clever implementation tricks and heuristics that are likely to beat any algorithm you write from scratch.
One of the more famous places where these constraint solvers are used is to deal with dependency management in package systems. For example, zeroinstall uses a SAT solver to resolve dependency conflicts.
A sillier example is the time I used a SAT solver to find the solutions to a flash puzzle game. The part I'm proudest of is that the SAT solver could handle the highly symmetric cases that stumped the hand-written backtracking algorithms. And if I had used an SMT solver instead of SAT, I would not have had to convert everything all the way down to boolean formulas.
Would a SAT solver be useful for compiler technologies?
A compiler is basically solving the problem of: here is a sequence of things I want done; here is a description of what the machine can do; find a combination of the latter which achieves the former. The details are the tricky bit, particularly for unpleasantly asymmetric instruction sets like the old 8-bit architectures.
It would be so nice if I could just throw a description of the problem at some sort of machine reasoning system and have it Just Work...
Yes, although it's hard to make it scale in every case. What you're talking about sounds like the field of program synthesis[1], which uses SMT solvers (among other technologies) to generate code to some specification. I worked on a system[2] that used this sort of technique to optimize code for a weird architecture (GreenArrays' Forth-based chip[3]) and it was certainly much easier to implement than a classical optimizing compiler.
Perhaps the most relevant example project here would be Rosette[4], which is basically a framework for easily writing synthesizers for different tasks. It would let you implement a description of your machine in Scheme (ie write a simple interpreter) and automatically generate a synthesizer for it that could compile programs in your language to formulas in a solver of some sort.
Hey, GreenArrays! I did code generation for the ShBoom / PSC1000 once --- heavily bending a traditional register-centric compiler architecture to produce terrible, terrible code. Fascinating processor, though. And I have an MSP430 right here on my desk.
I'm afraid your paper is largely beyond me, though, but I'll go read up on program synthesis. Thanks!
I had a long day at work and didn't see this. I posted another comment (couldn't edit; too late). You may be sure that I am an MS shill, cleverly astroturfing their now-free product.
I suspect that they've sold a fair few licenses in their time. SMT solvers are widely used in industry, and to my knowledge Z3 is one of the better ones.
It's probably just that the volume of sales doesn't make sense when compared to the size of the team.
I hope they will open source Microsoft.Automata too; it was very useful for a project where I had to generate samples of strings that had to conform to multiple regexes:
Z3 is something I didn't know about until encountering it here. I spent the last few hours playing with the solver; it's quite interesting, especially since I've had a fair amount of exposure to Scheme. I felt right at home after getting it up and running.
It compiled and installed glitch-free under FreeBSD. The only problem was finding documentation. Rise4fun.com is mentioned in another post; the exact URL for getting started is http://rise4fun.com/Z3/tutorial/guide. Also, very useful information here: http://www.smt-lib.org/
Could someone informed please explain how this is restricted to only proving theorems, rather than being a more general Prolog-like system?
From the Microsoft Research page, I read:
> Z3 is a high-performance theorem prover being developed at Microsoft Research. Z3 supports linear real and integer arithmetic, fixed-size bit-vectors, extensional arrays, uninterpreted functions, and quantifiers. Z3 is integrated with a number of program analysis, testing, and verification tools from Microsoft Research.
And with my current knowledge (which is low on the matter) I can't reconcile arithmetic, arrays, quantifiers, etc. with general program verification...
Z3 is an SMT solver (https://en.wikipedia.org/wiki/Satisfiability_modulo_theories). You can think of it like a SAT solver with extra optimizations for certain domains, so that it doesn't need to do bit-blasting to e.g. compare integers. SMT is NP-complete, just like SAT, but it is decidable (so the solver will "eventually" give you a yes/no answer, though "eventually" could be a long time).
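As a toy illustration with the Python bindings (a sketch), you can constrain mathematical integers directly, and the arithmetic theory solver handles them without ever building a bit-level encoding:

    from z3 import Ints, solve

    x, y = Ints('x y')
    # unbounded integers, not fixed-width bit patterns
    solve(x > 2*y, y > 100, x < 250)    # prints a satisfying model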
This is in pretty stark contrast to Prolog, which includes features which make it undecidable. Therefore you can make Prolog loop infinitely, while that is not possible in an SMT solver.
Fwiw some of the later logic-programming work coming out of Prolog has produced languages with Prolog-like syntax but without statement order being significant, and guaranteed termination. This paradigm, Answer Set Programming (ASP), is also based on solvers, so has some relationship to SMT, but feels more like Prolog to program, e.g. it has negation-as-failure. My favorite implementation is from the University of Potsdam [1].
Imo it's a much nicer modeling language, especially for incremental formalization, in part because of implementing a nonmonotonic rather than classical logic. It also has a nice generative-programming style where you can define generative spaces via "choice rules", where the solver is explicitly given a free choice for some variables, subject to constraints, and returns a satisfying artifact/model. That allows using it to construct constrained generative systems (e.g. generative music [2], level generation [3]). But Z3 far outperforms ASP solvers on problems where variables range over large domains (like "32-bit integer") because of the MT part of SMT. There's some experimental work on trying to combine the approaches, using Z3 (or another SMT solver) as the solver backend for a logic-programming system [4], but I don't know how usable it is.
I'm not sure I'd set up an opposition between SAT/SMT solvers like Z3 on one hand and ASP on the other. I recommend thinking of ASP (and other approaches like CSP, constraint satisfaction problems) as high-level languages for solving combinatorial problems. In this understanding, high-level (ASP, CSP) contrasts with low-level (SAT and SMT) for the same problem domain: just as high-level programming languages like Java are compiled down to low-level languages like x86 machine code, high-level formalisms like ASP and CSP can be compiled down to low-level SAT/SMT.
SAT/SMT is the assembly language of combinatorial problems because you have to state explicitly all forbidden solutions using CNF (conjunctive normal form). For example, if you have variables x, y and z, then the CNF

    ( x \/ y \/ not z ) /\ ( not x \/ not y )

has as models all maps from {x, y, z} to {0, 1} except {x=0, y=0, z=1}, {x=1, y=1, z=1} and {x=1, y=1, z=0}. The problem with using CNFs to enumerate forbidden solutions is that the formulae can get long (just like assembler programs can get long). With high-level formalisms like ASP you often get a much more succinct specification.
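(That example is small enough to check mechanically; here's a quick sketch with Z3's Python bindings that enumerates its models:)

    from itertools import product
    from z3 import Bools, Solver, And, Or, Not, sat

    x, y, z = Bools('x y z')
    cnf = And(Or(x, y, Not(z)), Or(Not(x), Not(y)))

    # try all 8 assignments and print the ones that satisfy the CNF
    for vx, vy, vz in product([False, True], repeat=3):
        s = Solver()
        s.add(cnf, x == vx, y == vy, z == vz)
        if s.check() == sat:
            print(vx, vy, vz)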
With a bit of squinting, you can even see first-order logic as a high-level formalism that compiles down to propositional logic (via skolemisation, resolution, unification, Herbrand's theorem, etc.).
In some theoretical sense I agree with that view, but I don't think we're currently at that stage from a practical perspective. Today, the established ASP solvers have performance and expressivity tradeoffs relative to the established SMT solvers. There are some early experiments (like the one I linked) that aim to do things like use SMT solvers as an ASP backend, but they are very much research stage. And they are also (so far) a bit "messy", requiring source-level changes in order to take advantage of Z3, rather than being able to transparently "compile" general ASP programs to take advantage of efficient solvers.
Uhm, you can throw Diophantine equations at Z3, which certainly isn't a decidable problem. I have no idea whether it errs on the side of termination (i.e. not giving an answer) or non-termination (i.e. running forever).
It might be better to see Z3 and other SMT solvers as souped-up SAT solvers instead of comparing them to general theorem provers.
The neat thing about Z3 is that it has efficient solvers for the more specific subproblems (linear programming, bit vectors, etc.) that you can't have if you state everything using "general" constraint programming. At the same time, Z3 lets you work at a higher level of abstraction - for example, you can reason about bit vectors or arithmetic directly instead of having to encode them with boolean formulas.
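For example (a small sketch with the Python bindings), a bit-vector identity can be stated and proved directly, with Z3 doing the boolean encoding internally:

    from z3 import BitVecs, prove

    x, y = BitVecs('x y', 32)
    # Z3 bit-blasts this to SAT under the hood; we never write the boolean formulas
    prove((x ^ y) ^ y == x)    # prints "proved"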
Z3 has a core engine optimised to solve certain types of problems. To use Z3 for higher-level problems, such as program verification, you first translate your higher-level problems into problems that Z3 understands. Z3 might not be suitable for all problems, but it is meant to be highly optimised for the problems that it can handle.
Coq already could, if they wanted to. Z3 supports the SMT-Lib format, which means that any theorem prover that can both output that and execute a shell command can invoke Z3. This has no bearing on Z3's specific license, unless you want to use that tool for commercial purposes. For an example of this being used in a similar tool to Coq, Isabelle/HOL's 'Sledgehammer' tactic will attempt to use Z3 (and then reconstruct a proof) where its own solvers fail.
One of the main devs of Z3, Leonardo de Moura, is actually working on a new dependently typed theorem prover called "Lean": http://leanprover.github.io/
Is Lean an LCF system or Curry-Howard based? It seems to be the latter, but if so, what's its main advantage over the many other Curry-Howard based systems?
Idris and Coq are based on the Curry-Howard correspondence. That makes it difficult / impossible to integrate Z3, because Z3 in general does not output concrete proofs. More precisely, some Z3 modules can output proof objects, but not all. There is work on integrating Z3 with LCF-style provers like Isabelle [1]. There is also active work on Z3 integration with Coq, although I'm not sure how this works. See [2] for a brief discussion of these issues by one of the Z3 authors.
Models are not proofs (in fact they're the opposite). In SAT/SMT, a proof is a sequence (a tree really) of deductions that show that you can conclude "false" from the set of initial assumptions, and that, therefore, there is no model.
I'm not sure I understand what you are saying here.
The tree of resolution deductions that SAT provers (modulo theories or otherwise) like Z3 produce is of course a proof in a valid proof system. Typically that is some variant of a propositional resolution proof system. The problem is that the reasoning is classical: to show A, falsity is deduced from (not A). This is not constructively valid; you can only deduce (not not A) if (not A) implies falsity.

So the resolution proof that Z3 outputs can be converted to a classical proof and expressed in classical proof systems like (Isabelle/)HOL, but not in constructive systems like Coq and Agda.
Either I misread the parent's question or (s)he edited it. I had read it as "[...] cases where Z3 will say sat, but not output a proof when you ask for it", and was simply trying to clear up the misunderstanding that a 'sat' answer should come with a proof.
I heard Microsoft uses provers to check (automated verification) third-party kernel-mode device drivers. Is that the Z3 prover? I searched on Google and got several results, but the term "kernel" is overloaded and means different things in operating systems and in provers/solvers.
When people talk about kernels here, it has more to do with proof verification. Once you have written down a proof for something, you need a way to verify that the proof is correct: checking for syntax errors, checking that each step is a valid inference step, etc. The more advanced theorem provers tend to use a complex system that lets the programmer build a proof that's easy to describe, but in the end they convert it to a longer proof that a simpler verifier (the kernel) can check.
As for Z3, its strength is constraint solving. You can specify a set of constraints and it will try to find whether there is a viable solution that satisfies them. It includes a SAT (boolean satisfiability) solver and beefs it up with support for some higher-level things, like integer arithmetic, bit vectors, etc. This kind of constraint solver is very useful for finding bugs: for example, the notorious Intel floating point bug was found by asking a SAT solver to find an input where the processor's circuit produced the wrong value.