I prototyped a system to orchestrate maintenance on fleets of devices. The idea was that, rather than telling the system how to do it (e.g. workflows to roll out an update), you'd tell the system what you wanted (e.g. up-to-date machines), what actions were available (e.g. pull a machine from rotation, apply an update), and what constraints to obey (e.g. X of Y machines must be online, don't work in more than two regions simultaneously).
I modeled a few scenarios like that in Picat and had it generate optimal plans. It worked swimmingly for pet problems, but predictably fell over scaling to cattle sizes. Planning is EXPTIME after all (e.g. Towers of Hanoi).*
Picat does have an escape hatch - you can define heuristics - so I built a random forest of state predicates and trained a naive Bayes classifier to predict fruitful paths. But even with that, and symmetry breaking constraints, and even some hierarchical planning, I couldn't make it work without too much handholding.
It's still AI winter for the classic GOFAI problem domains, apparently. :/
* maybe not, actually, if you reformulate the planning problem as returning a polynomial-time generator of a potentially exponentially long plan
There are plenty of commercial solvers out there that beat the pants off the open source options in terms of performance and in terms of depleting one's wallet :)
CPLEX, Xpress, GUROBI, and Hexaly all come to mind. Hexaly is really good for scheduling problems and things like vehicle routing. You typically access these via an API they offer for the popular industry languages. This approach makes a lot more sense to me than having a dedicated solver language that isn't as good for all the general-purpose stuff. Calling GUROBI from Python is a breeze, as is all the standard stuff in Python. Mosek is a lot cheaper than GUROBI, but both of its APIs are extremely low-level and the performance isn't as good as GUROBI's either.
Yup. & when you get a working model matched up with an industrial-scale decision problem that's valuable to solve, whether you can solve it "optimally" is arguably only of academic interest. The problem is only a simplified model of reality anyway -- it's often better to get a quick, close-enough approximate solution to a problem that's a good approximation of the situation than an exact optimal solution to a simpler problem that's a poorer approximation.
If you're lucky enough to get a problem that's basically stable over time, where the problem structure doesn't change, then maybe you can get improved solutions rapidly at industrial scale by replacing a black-box MIP solver like Gurobi/CPLEX with a decomposition that exploits the problem structure: sub-problems can be solved by some specialised graph algorithm, heuristic, or brute force (if they all have bounded size), while the general-purpose MIP/LP solver is left with the job of dealing with the shared resources and constraints that bind the sub-problems together. The downside to a highly specialised custom solver is that it usually isn't flexible to changing requirements (unless you get very lucky) -- a slight change in a business rule can break the problem structure that underpins the entire solution approach.
I disagree with the first statement for some industries. Mine has several companies with some of the hardest MILP problems in the world: millions of variables and constraints that need to be solved to global optimality, both for the sake of fairness and to save many millions of dollars per year. Electricity markets are extremely complicated, and although it would be amazing to simplify things, the decisions and prices need to be correct. Performance is critical too, as these auctions run 24/7. The problems are also always being changed, so black-box globally optimal solvers are a need-to-have.
> There are plenty of commercial solvers out there that beat the pants off the open source options in terms of performance and in terms of depleting one's wallet :)
While this is generally true, there are some exceptions. I recently compared the performance of CP-SAT vs CPLEX for a problem (linear constraints and objective). For large instances where proving optimality in a reasonable time was out of the question, CP-SAT had much faster convergence to near-optimal solutions than CPLEX when the time limit was small enough (~30s to a few minutes). This is with the CPLEX solver tuned towards improving the upper bound as much as possible (it was a minimization problem).
I think that you've misunderstood what I said. For large instances of this specific problem (and when the time limit is too short to allow either CP-SAT or CPLEX to prove optimality) the best integer feasible solution found by CP-SAT is generally of better quality (w.r.t. the objective value) than the best integer feasible solution found by CPLEX. Furthermore, in some cases, CP-SAT can offer a certificate of optimality faster than CPLEX.
Well, these are generic solvers, meaning they are tuned to perform well across a variety of problems, so you can easily find cases where you get unexpectedly low performance.
My advice is if you are on the clock, just use whatever works best out of the box.
Now if you plan to solve this problem thousands of times daily, then I would invest in writing custom callbacks in CPLEX to inject feasible solutions during the search since its heuristics are suffering in your problem case.
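For the lighter-weight half of that idea, you don't even need the callback API: docplex can inject a known-good solution before the search starts as a MIP start. A rough sketch (the model and values are made up, and true in-search injection would go through CPLEX's callback API instead):

    # Hedged sketch: seeding CPLEX (via docplex) with a feasible solution.
    from docplex.mp.model import Model

    m = Model(name="warm_start_demo")
    x = m.binary_var(name="x")
    y = m.binary_var(name="y")
    m.add_constraint(x + y <= 1)
    m.maximize(3 * x + 2 * y)

    # Inject a feasible solution found by your own heuristic.
    start = m.new_solution()
    start.add_var_value(x, 1)
    start.add_var_value(y, 0)
    m.add_mip_start(start)

    sol = m.solve()
    print(sol.objective_value)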
can't you just use the strong duality theorem to reframe an integral optimization problem as a system of integer inequalities? I thought you usually don't do that because the satisfaction problem is harder in practice.
No unfortunately strong duality does not hold for integer problems.
But your intuition is right, this is roughly how IP optimizers work. They relax the problem to a continuous one (aka convert the {0,1} to [0,1]), solve this super easy linear problem and try to find integral solutions close to this linear optimum. If not, start branching on the integers and solve smaller and smaller linear problems until you prove optimality.
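To make that concrete, here's a toy version of the relax-and-branch loop on a 0/1 knapsack, using SciPy's LP solver for the relaxations. The data is made up, and real MIP solvers add presolve, cutting planes, and far smarter branching on top of this skeleton:

    # Toy branch-and-bound: solve the LP relaxation, prune by bound,
    # branch on the most fractional variable.
    import numpy as np
    from scipy.optimize import linprog

    values = np.array([10.0, 13.0, 7.0, 8.0])   # maximize values @ x
    weights = np.array([5.0, 6.0, 4.0, 3.0])    # subject to weights @ x <= capacity
    capacity = 10.0

    best_val, best_x = -np.inf, None

    def branch(lo, hi):
        """Solve the relaxation with lo <= x <= hi, then branch if fractional."""
        global best_val, best_x
        res = linprog(-values, A_ub=[weights], b_ub=[capacity],
                      bounds=list(zip(lo, hi)), method="highs")
        if not res.success or -res.fun <= best_val:
            return  # infeasible, or the LP bound can't beat the incumbent
        x = res.x
        i = int(np.argmax(np.abs(x - np.round(x))))  # most fractional variable
        if abs(x[i] - round(x[i])) < 1e-9:
            best_val, best_x = -res.fun, np.round(x)  # integral: new incumbent
            return
        for fixed in (0.0, 1.0):  # branch: force x[i] = 0, then x[i] = 1
            lo2, hi2 = lo.copy(), hi.copy()
            lo2[i] = hi2[i] = fixed
            branch(lo2, hi2)

    branch(np.zeros(len(values)), np.ones(len(values)))
    print(best_val, best_x)  # optimal here: value 21, picking items 1 and 3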
Gurobi is a MIP solver, right, not a planner? I use Gurobi at work for a certain kind of bi-level programming and it's amazing, like literally ~500x faster than CBC. Picat's planner is more like a Prolog flavor of PDDL (e.g. fast-downward and its ilk).
MIP, LP, and nonlinear for GUROBI. GUROBI and CPLEX can of course do planning if you formulate your problem in a certain way, but some solvers like Hexaly show large performance gains over GUROBI for things like vehicle routing, as they take a very different approach (no LP or MIP).
Are the commercial offerings you mentioned better than TimeFold? [0] (formerly known as OptaPlanner before the main developers forked it)
TimeFold's heuristics-based approach makes fast solutions to even highly-complex scenarios within the reach of anyone who can write Java or Python expressions that evaluate to true when constraints are satisfied.
Huh, didn't know OptaPlanner was forked. A bit sad; you would have thought it was a good fit for all the enterprisey business automation stuff at IBM. Does OptaPlanner have anyone left working on it on the IBM(/RH) side?
Anyways, I wish the best for the Timefold team; hopefully they can find success independently. With all the money sloshing around for all sorts of dumb AI projects, I think Timefold definitely deserves a portion too.
Could you elaborate on how OptaPlanner/Timefold wasn't ergonomic?
I'd love to understand this better, so we can improve it in Timefold Community.
To help deal with new problems, we're creating out-of-the-box quickstarts for common use cases:
https://github.com/TimefoldAI/timefold-quickstarts
As for the Community/Enterprise split - we want to continue the open source project, but we need to eat too :)
Basically, the documentation was set up so that I could come in and do a bunch of the example problems fairly easily, but the part where I model my own problem and execute/solve it was always underserved. Or it felt like that to me anyway. I don’t have a background in the theory, but I can easily see that it could solve my problems if only I could figure it out.
> As for the Community/Enterprise split - we want to continue the open source project, but we need to eat too :)
Yeah, I don’t begrudge you that. It’s just that when I see that multithreading is locked away, I know that the moment I start using the product I’ll find out my toy problems are enterprise-scale. Sort of like Murphy's law, but for software features.
We're still figuring out a good, clear, honest pricing model for Enterprise that works well across company sizes (small - big), industries (healthcare vs telecom vs ...) and usage frequency (strategic vs operational planning).
We're happy to send you version 1 of our pricing table (use the contact form on the Timefold website), but we need your feedback on whether it fits your business model, deployment model, usage model, etc. to validate our pricing.
Just fork the community edition and add your own implementation of "nearby selection" and "multithreaded solving"... in effect, you get the enterprise edition for free; you just have to pay your own developers. Believe it or not, there are software companies that would go this route. I worked for one... as a developer, my time was far less important than the penny pinching and the heavy bureaucracy required for procurement.
It would cost less to pay Timefold whatever price they quote.
A lot of these companies started out with listed pricing and then grew and got crazy expensive. One of the nice things about the Mosek solver is that the price is listed and it is reasonable. I just dislike their API and documentation, as they're written for mathematicians and not business SMEs.
> their API and documentation as it's written for mathematicians and not business SMEs
I've never worked with any of these, but that's what I would expect as well. API documentation that describes how to solve each kind of mathematical optimization problem.
Do you have an example of an optimization solver with API documentation for business SMEs?
Yeah. GUROBI makes it really easy to just do something like
m.addConstr(x + y <= limit)
where `m` is a model object, `x` and `y` are decision variables, and `limit` is a constant.
The above is a no-brainer for any business expert. Some solvers are a LOT more low-level though: you have to think of the problem in matrix form and specify row/column data, which is a lot more involved and assumes significantly more knowledge of the user than the former.
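For what it's worth, a complete model in that style is only a few more lines of gurobipy (the toy numbers here are mine):

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("toy")
    x = m.addVar(name="x")  # continuous, >= 0 by default
    y = m.addVar(name="y")
    limit = 100

    m.addConstr(x + y <= limit, name="capacity")
    m.setObjective(3 * x + 2 * y, GRB.MAXIMIZE)
    m.optimize()

    if m.Status == GRB.OPTIMAL:
        print(x.X, y.X, m.ObjVal)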
If you can convert your problem into a convex one (which I believe is often possible if you’re clever about how you express it), that would seem to be a pretty good option, no?
CVXpy is a frontend that transforms problems into a form that different solvers can understand.
It does make it very easy to formulate problems in Python, but you still need a solver like Gurobi on the backend. You can use a variety of solvers on the same problem though, which is nice.
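A minimal sketch of that workflow (the data is made up; CVXpy picks a suitable installed solver unless you name one):

    import cvxpy as cp
    import numpy as np

    x = cp.Variable(3, nonneg=True)
    target = np.array([1.0, 2.0, 3.0])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - target)),
                      [cp.sum(x) <= 4])

    prob.solve()                    # default: any suitable installed solver
    # prob.solve(solver=cp.GUROBI)  # or route the same model to Gurobi
    print(prob.value, x.value)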
Thanks so much for sharing these. I had no idea that there were commercial API offerings for this kind of stuff. Just never worked for a business that had this sort of need.
If any of these had reasonable pricing I’d be happy to pay, but if the first price you see is ‘contact us’ you can be certain it’s too much for hobby use…
For GUROBI at least, you can access a free version directly through Anaconda or pip, I think, which is limited to a few months and can only solve smaller problems. It does give you a sense of its capabilities. It is also free for academics. It is extremely expensive for production use though.
It’s worth asking for free access. Make it clear that you are an individual (not a commercial user) looking to learn and that you wouldn’t be a burden on their network or support system.
It is possible that something like CP-SAT (https://developers.google.com/optimization/cp) would have scaled well in your case. This solver easily handles an absurd number of variables and constraints, and has excellent heuristics built in.
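For anyone who hasn't tried it, the modeling layer is quite approachable; a tiny made-up example (integer variables only, since CP-SAT works over integers):

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewIntVar(0, 100, "x")
    y = model.NewIntVar(0, 100, "y")
    model.Add(x + 2 * y <= 140)
    model.Maximize(3 * x + 4 * y)

    solver = cp_model.CpSolver()
    solver.parameters.max_time_in_seconds = 10.0  # anytime: returns best-so-far on timeout
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print(solver.Value(x), solver.Value(y), solver.ObjectiveValue())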
"Planning is EXPTIME after all (e.g. Towers of Hanoi).*"
The old planners would include meta-rules, or heuristics, that decided which rules to apply. That would cut the search space. Some split the problem into different representations with specialized, automated solvers. Jahob Analysis System and Cyc come to mind.
Far as for real-world use, the neatest design I remember from classic A.I. was the Procedural Reasoning System. I've always wanted to see a version of it rebuilt with modern methods supplementing its weaknesses. Just for kicks and to see what it could do.
Why don't high-quality open-source solvers exist? The SOTA in other numerical computing is often open source. I've always wondered why optimization was different.
I've used MIP, SAT, and plenty of other solver technologies, and they aren't perfect, but I got them to work most of the time. I've never once in my life been able to get a CP solver to solve anything that's even close to a real-world problem.
For those interested, HN user hakank (Hakan Kjellerstrand), who is a very active member of the constraint programming community (among others), has a ton of Picat resources and examples on his website: http://www.hakank.org/picat/
I'd -- as usual -- suggest Prolog. It's elegant, easier to understand, and more mature. It comes with "batteries included" if you want constraint solving over finite domains.
Besides that, MiniZinc is a phenomenal interface to a whole range of solvers specialised for various purposes, which are more likely to get you where you want to go if you're not an expert. In the Prolog case -- for all its benefits -- the amount of "mechanical sympathy" required to make it perform well can become substantial very quickly.
Finally, once you've written something in Picat, have a think about how you'd write it in some other language. I think you'll find that, for these toy problems, it's easy in other languages too. After all, it's a handful of lines to write Dijkstra or A* in most functional programming languages, and defining a state space for a search algorithm is essentially all you're ever doing.
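To illustrate that last point, a generic A* is roughly a dozen lines of Python, and everything problem-specific lives in the `neighbors` and `heuristic` callbacks you pass in (all names here are mine):

    import heapq
    import itertools

    def astar(start, is_goal, neighbors, heuristic):
        # neighbors(s) yields (next_state, step_cost) pairs;
        # heuristic(s) must never overestimate the remaining cost.
        tie = itertools.count()  # tie-breaker so the heap never compares states
        frontier = [(heuristic(start), next(tie), 0, start, [start])]
        seen = set()
        while frontier:
            _, _, cost, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return cost, path
            if state in seen:
                continue
            seen.add(state)
            for nxt, step in neighbors(state):
                if nxt not in seen:
                    heapq.heappush(frontier, (cost + step + heuristic(nxt),
                                              next(tie), cost + step, nxt, path + [nxt]))
        return None

    # Toy state space: fewest +1 / *2 steps from 1 to 13
    # (a zero heuristic degrades A* to Dijkstra).
    print(astar(1, lambda s: s == 13,
                lambda s: [(s + 1, 1), (s * 2, 1)],
                lambda s: 0))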
My first thought was, this looks like a type system, except you need to solve it yourself. In TypeScript, naively:
const main = <a extends any, b extends any, c extends any>([a_, b_, c_]: [a, b, c, a]) => {
  type SomeTuple = [a, b, c, a];
  const X: a = a_;
  const Y: Exclude<SomeTuple[number], a> // = <put solved value here, get a type error if the value does not satisfy the constraints>
  console.log(X, Y);
}
Except nothing solves this because a, b, and c can all be the same. After trying to express this correctly, I ended up with something that appears useable (but that still uses assertions and doesn't really express the type of Y correctly).
type Narrowable = string | number | bigint | boolean;
/*
Express the type of a value in a tuple that is not the type of the second parameter
For example:
- ValOfTupleExceptFor<[1,1,6,2,3], 1> -> 6
- ValOfTupleExceptFor<[1,1,6,2,3], 6> -> 1
*/
// Note: Val is loosened from `Tup[number]` so the recursive call type-checks,
// and the patterns are readonly so `as const` tuples still match.
type ValOfTupleExceptFor<
  Tup extends readonly Narrowable[],
  Val extends Narrowable
> = Tup extends readonly [infer First, ...(infer Rest extends readonly Narrowable[])]
  ? First extends Val
    ? Rest extends readonly []
      ? never
      : ValOfTupleExceptFor<Rest, Val>
    : First
  : never;
const NO_SOLUTION: unique symbol = Symbol('NO_SOLUTION');
const getValOfTupleExcluding = (
  tup: readonly Narrowable[],
  val: (typeof tup)[number]
): ValOfTupleExceptFor<typeof tup, typeof val> => {
  if (tup.length === 0) {
    // checking `!first` here would wrongly bail out on falsy values like 0 or false
    return NO_SOLUTION as never;
  }
  const [first, ...rest] = tup;
  if (first === val) {
    return getValOfTupleExcluding(rest, val);
  }
  return first as ValOfTupleExceptFor<typeof tup, typeof val>;
}
const main = <a extends Narrowable, b extends Narrowable, c extends Narrowable>([a_, b_, c_]: [a, b, c, a]) => {
  const someTuple = [a_, b_, c_, a_] as const;
  const X: a = a_;
  // This still resolves to type 'never'
  const Y: ValOfTupleExceptFor<typeof someTuple, a> = getValOfTupleExcluding(someTuple, a_);
  console.log(X, Y);
}
Which really highlights how powerful the 'planner'-style program is in terms of simplicity and conciseness. I guess TypeScript isn't even powerful enough to express this kind of constraint.
Nice to see GOAP being referenced again. It was the secret sauce that made F.E.A.R.'s enemies so fun. And Jeff Orkin's paper on how it works is very readable and entertaining.
I've actually been using Prolog professionally, including some CLPFD, and I love it. I want it everywhere. Or more precisely, I want a logical core with an emphasis on purity, pushing imperative action to the edges.
It's so sad that as an industry we seem locked into really bad tools.
Yes, the underlying engine of Picat is a (slightly altered) B-Prolog, which is available as the `bp` module, from which one can use many traditional Prolog constructs, for example `bp.length/2` instead of Picat's `length/1` function. This can help when porting Prolog programs to Picat. In fact, quite a few Prolog programs can be run directly in Picat, perhaps with just a few adjustments.
This approach to programming is not all new to me (I studied Prolog at university which looks very similar but does not have the planner feature) but the planner is a very elegant and simple way of solving problems.
The comments about video games at the end of the article make me wonder: the planner feature allows solving problems very easily with a few lines of clear code, but how does the performance compare against an algorithm written in an imperative language?
Picat seems to be fairly efficient compared to similar languages [1], but I can't find a comparison against "standard" languages.
This is interesting. The dream of telling the computer where to end up is something I have too.
It might be interesting to someone, but I used A* to do code generation, going from one state to a target state. I'm not experienced with the planning community or solvers, except for playing around naively with ortools.
I generate assembly instructions to move between states.
It finds all the hidden state transitions of function calls to get to the goal.
I also run it in parallel using Python multiprocessing to speed up the search, with dynamic neighbour generation, because neighbour generation differs between threads and my original attempts couldn't parallelise A* without sharding it.
The dream of my experimentation is that you tell the computer what you have and what you want, and it works out the correct traversals for you.
My personal intuition is that programming is logistics, like Factorio or a factory. This is why it's called a "sliding puzzle": it's a puzzle where you have to move things around to see the correct picture.
they're syntactically related in that both come from the Prolog world, and you can indeed use ASP to do planning (there are examples of Towers of Hanoi and Blocks World in the ASP guidebook), and you can incorporate heuristics into ASP.
...but actually they're really different!
ASP isn't Turing-complete - it's a lot more like an SMT solver. crucially, there's a grounding stage, where the set of every expressible term in the model (its Herbrand universe) is explicitly written down. so if you have e.g. `f(a;b). g(X,Y) :- f(X),f(Y).` then it will write out every expansion of g -- g(a,a), g(a,b), g(b,a), g(b,b) -- during pre-processing.
this makes ASP very powerful, and very fast even at complex problems, but it dooms the solver if the universe is large.
in contrast, Picat is basically a souped-up Prolog. it's a full programming language, and it doesn't require grounding, so infinite state spaces are okay. it leverages its tabling mechanism to memoize predicate evaluation, and it automatically manages the time/space tradeoff with search, which is nifty. but at the end of the day it's brute force, not deep witchcraft like Z3.
I'm a little confused about how planning is different from vector reachability, which, from what I understand, has Ackermann complexity rather than EXPTIME. Can anyone help me out with the constraints on "planning" that allow it to be solved in a sane amount of time?
> [...] from what I understand, has Ackermann complexity rather than EXPTIME.
Really bad worst case times aren't necessarily bad in practice, if most instances you actually encounter can be solved quickly (especially if you are happy to be satisfied with worse than proven-optimal solutions.)
Compare how Hindley-Milner type inference, which forms the basis of Rust's or Haskell's type systems, is exponential in the worst case (DEXPTIME-complete, in fact), but typically fast in practice.
We normally pick up stuff like that in papers, blog posts, etc. The only heuristics book I remember was How to Solve It. I also found a survey paper on heuristics. Here they are in case they help:
I think in classic planning like STRIPS, the size of the state space is bounded exponentially in the size of the input (conditions can be either true or false).
This is not the case of vector reachability.
I'm not familiar with picat, but the arithmetic operations used in the blog post suggest to me that the size of the state space is not necessarily bounded by the input. I believe the mention of EXPTIME in the post was removed.
Funny, I actually appreciated that. I only wanted broad strokes and didn’t want to dig into detail, so having the explanation hidden made it easier for me to read.
Minor sidenote: For Prolog-esque syntaxes, rather than writing "comma first" to solve the problem of moving lines around, I've found that ending predicates with a simple `true.` solves it more elegantly.