I used to use Prolog a lot on R&D projects at SAIC, but since becoming a consultant, few customers have wanted work done in Prolog (it has been about 5 years since anyone has paid me to do Prolog).
Prolog is awesome for some types of problems like graph operations, ad-hoc rule systems, planning, etc.
If you want to play/learn, I would recommend SWI-Prolog. Jan Wielemaker does a great job writing and maintaining SWI-Prolog. The semantic web libraries and the other SWI-Prolog libraries are very good stuff!
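For a taste of the graph side of that, reachability is just a couple of clauses. A minimal sketch (the edge facts are invented) that runs in SWI-Prolog:

    % Hypothetical edges of a small directed, acyclic graph.
    edge(a, b).
    edge(b, c).
    edge(a, d).

    % path(From, To, Route): Route is the list of nodes visited.
    % (No cycle check; a visited list would be needed for cyclic graphs.)
    path(X, X, [X]).
    path(X, Y, [X|P]) :- edge(X, Z), path(Z, Y, P).

Querying ?- path(a, Where, Route). then enumerates, via backtracking, every node reachable from a (including a itself) along with the route taken.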
Thanks for weighing in - I am a somewhat frequent visitor to your blog and websites. This type of discussion (e.g., using uncommon or less well-known languages to be more productive or to reason about an approach to a problem domain) comes up here on Hacker News frequently.
I have been searching for new work opportunities that would involve more freedom in programming language choice (specifically allowing me, as the developer, to explore tools like Prolog, Lisp, and Ruby as appropriate). Sadly, this has been somewhat fruitless in my geographic region. There is a (probably well-founded) desire on the part of companies to use what we can refer to as "lowest common denominator" tools (Java, C#, Oracle, MS SQL Server), with the occasional upstart using Ruby/Rails. When I say LCD tools, I don't mean to denigrate those languages/platforms, merely to point out that companies seem more comfortable knowing that there is a large pool of potential programmers available with those skills.
PG has weighed in on the side of starting your own company and using more powerful languages as a competitive advantage (the whole blub/magic of Lisp set of essays). It seems that few companies see programming language as a competitive advantage or (based on your comment) are interested in hiring consultants such as yourself to implement higher-level software that leverages these uncommon languages.
I'm curious - have you had projects where you felt that Prolog or Lisp represented a "better" choice than Java or Ruby but had customer requirements that specified language choice upfront? Have you "pitched" Prolog or Lisp to customers, maybe saying something like "ah, using Prolog here would require only 20 hours of my dev/consultant time, but using Java will require an 80 hour solution"?
I'm quite interested in using Prolog, Scheme, or Lisp at the "day job" so hearing any of your thoughts might help better prepare my own internal pitches . . .
Hello Tom, first I should say that Ruby is my favorite language, and fortunately there is a lot of demand for Ruby development. I don't really pitch customers on language choice, since almost everyone I work for is very tech savvy and knows what they want to use.
If you want to work in a particular language such as Common Lisp, I suggest writing a useful open source project; that could lead to work in Lisp, and at the least you will learn something and have fun.
Yes, but the raison d'être for Prolog is to do declarative/logic programming, which is fundamentally about programming by writing a clear, logical model of the problem that incidentally happens to be executable and queryable.
"Declarative programming clears the mind." - Sterling & Shapiro, _The Art of Prolog_
Provided you don't abuse the cut operator, a Prolog program will consist of a series of rules which each have a logical meaning in isolation. Reasoning about Prolog code is so different from trying to unravel what's actually happening with several layers of OO polymorphic dispatch, it's not even funny.
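To make that concrete, here is the stock example; each clause below can be read, and checked, as a standalone logical statement:

    parent(tom, bob).
    parent(bob, ann).

    % "X is an ancestor of Y if X is a parent of Y."
    ancestor(X, Y) :- parent(X, Y).

    % "X is an ancestor of Y if X is a parent of some Z
    % who is an ancestor of Y."
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).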
"Elegance is not optional." - Richard O'Keefe, _The Craft of Prolog_
I've always been a fan of the "Don't just do something! Sit there." approach to work. Not only do I sometimes find really clever ways around a problem, or spot weird issues that wouldn't immediately be apparent, but I also save a whole bunch of potential RSI :)
Added benefit: When it is time to start coding, you can amaze your coworkers by brain dumping a decently coded solution at typing speed.
That's some of the most understated advice. Quite often managers like to see lots of code, or developers have screwed-up incentives and get paid by the line of code. If a manager walks by, their heart gladdens when they see you banging on the keyboard like mad -- to them that equals "work being done". They fail to realize that sometimes it equals "more unnecessary garbage added to the existing pile of code" that will eventually make the product unstable, brittle, and hard to maintain.
Sitting there with a notepad and a pencil, thinking about the problem, is when I find I actually solve the difficult problems, not when I'm typing furiously at a keyboard.
One day another guy and I were debugging a piece of code by staring at it, figuring the logic out in our heads, and pitching theories back and forth. The boss walked past, saw we were sitting there "doing nothing", and told us to do some work... ugh!
I have no idea how this works, but it does: when sharing a screen with someone and doing work, if one of you is pointing at something code-looking, the boss reverts to thinking you are working. This works even if you are discussing non-work things... it's not the content they care about but the form :/
At a talk, someone (might have been Eric Evans) said something along the lines of: when he was modeling a domain with a group, he wouldn't let them leave until they had three different ways of doing it. Even if the first was perfect and the last two ended up being stupid, forcing them to brainstorm other ways of doing it often turned up hidden gems.
Learning Prolog was one of the most exciting things about getting a C.S. degree. It's vastly underrated, though admittedly not very useful in real life.
The book Seven Languages in Seven Weeks has a chapter on Prolog. There is an interview in there with a guy who was doing research on dolphins and used Prolog to build a dolphin simulator. As they learned new things about the dolphins they would add "facts" to the simulator and used it to try to discover new things.
They could also use it for "debugging" training problems. He had a story about a dolphin who would apparently ignore a specific instruction and just sink down into the tank. They were able to use the simulator to figure out that the problem was not with the dolphin, but with both the instruction given and the surrounding environment at the time they would give the instruction. It turned out the dolphin was actually "following the rules", but the trainers failed to see it.
He also talks about using Prolog to build a scheduling system. Access to trained dolphins is a precious resource and it was important to find the optimal schedule for research. With a handful of facts and rules, Prolog was able to discover it for them. Pretty neat stuff.
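The book doesn't show the actual code, but the shape of such a scheduler is easy to imagine. A toy sketch (all facts invented): availability and needs are data, and a single rule assigns each experiment a dolphin without double-booking:

    available(flipper, morning).
    available(flipper, noon).
    available(echo,    noon).

    needs(exp1, morning).
    needs(exp2, noon).

    % schedule(-Pairs): give each experiment a dolphin available in
    % the slot it needs, using each dolphin at most once.
    schedule(Pairs) :-
        findall(E, needs(E, _), Exps),
        assign(Exps, [], Pairs).

    assign([], _, []).
    assign([E|Es], Used, [E-D|Rest]) :-
        needs(E, Slot),
        available(D, Slot),
        \+ member(D, Used),
        assign(Es, [D|Used], Rest).

Backtracking does the search: if one assignment leaves a later experiment stranded, Prolog retries with other choices.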
If you want to learn about Prolog, the best book IMHO is _The Art of Prolog_ by Sterling and Shapiro. The newer edition is out of print and expensive (but worth it!), the previous edition is more like $6 and will give you a good taste. _Programming in Prolog_ by Clocksin & Mellish is a pretty straightforward overview of the language, but TAoP really gets at Prolog's real potential.
I also liked _Clause and Effect_, which is somewhat like a "Little Schemer" for Prolog, with a couple fairly substantial projects at the end. "Learn Prolog Now" (http://www.learnprolognow.org/) is free online, but a bit basic.
CTM has a great chapter on relational programming and Prolog, too - Peter Van Roy worked on Aquarius Prolog (http://www.info.ucl.ac.be/~pvr/Peter.thesis/Peter.thesis.htm...), and he also follows up with constraint programming. I highly recommend using a Prolog with constraint programming libraries, because it will make a big difference when Prolog's default behavior is too naive.
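For a taste of what those libraries buy you, here is a tiny CLP(FD) example using SWI-Prolog's library(clpfd): three tasks get distinct hours in 1..4, with an ordering constraint, and the solver does the search:

    :- use_module(library(clpfd)).

    % Three tasks, each in a distinct hour from 1 to 4,
    % with task A required to run before task B.
    slots(A, B, C) :-
        [A, B, C] ins 1..4,
        all_different([A, B, C]),
        A #< B,
        label([A, B, C]).

Rather than blindly enumerating all 64 assignments the way naive generate-and-test would, the constraints prune the domains before labeling even starts.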
I would recommend learning Prolog, because it's sufficiently different from every other language that it will change the way you think, and even if you never use it in production, it's an excellent prototyping language. As a bonus, learning Prolog will help with Erlang, and vice versa. (Erlang was initially a DSL on SICStus Prolog, and many Prolog idioms came along.)
> As a bonus, learning Prolog will help with Erlang, and vice versa. (Erlang was initially a DSL on SICStus Prolog, and many Prolog idioms came along.)
As far as Erlang goes, I have always been put off by the lack of backtracking. They took the best feature of Prolog and threw it out. Everything else is already strange and quite awkward, and those are the parts they kept, while throwing away backtracking. (I am looking at Erlang as a general-purpose language here; I know that there is no need for backtracking in a real-time telephony system.)
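For anyone who hasn't felt what's missing: backtracking is what lets a single goal enumerate alternatives on demand. In SWI-Prolog, for instance:

    ?- select(X, [1, 2, 3], Rest).
    X = 1, Rest = [2, 3] ;
    X = 2, Rest = [1, 3] ;
    X = 3, Rest = [1, 2].

Each ';' re-enters the last choice point; in Erlang you write that enumeration out by hand.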
By the way, if you want to experiment with Prolog, try SWI-Prolog. It's open, it has an IDE and a bunch of modules, including support for CHR (Constraint Handling Rules).
The (at the time de facto) standard for Prolog shifted between the two editions, and the latter has several more chapters with larger example projects. Still, the core of the book is about logic programming itself, with Prolog used as the example language.
Keep in mind you will not learn enough about any of these languages to run off and build something. Each chapter covers a different language and primarily discusses some of the features of each that you may not see in another. It's kind of like a programming language buffet where you can have a small taste of a bunch of different things.
CTM (http://www.info.ucl.ac.be/~pvr/book.html) is a similar treatment, but covers about a dozen programming models (their trade-offs and semantics) rather than languages. Comparisons between languages can often get bogged down in surface details.
I once worked with a guy who was using Prolog to develop a system that devised optimal cargo hold layouts for shipping barges. Seems fairly "real-life" to me.
Optimal packing problems are fun 1-hour programming contest fodder - so many problems that you might solve in Prolog are trivially written imperatively or functionally as combinatorial or permutational recursive searches with pruning - but they are such a small kernel of a real-world application that I don't think they justify the complexity of linking in another language, runtime, etc.
Where you're dynamically building up facts and clauses and need more general solving logic at run time, I think Prolog is more warranted. But that's a smaller niche, IMO.
> trivially written imperatively or functionally as combinatorial or permutational recursive searches with pruning
In other words: an ad hoc, informally-specified, bug-ridden, slow implementation of half of Prolog.
I agree that Prolog would probably fare better as an embedded language like Lua, though - it's best suited to things like rules engines and parsers, rather than whole applications.
No, you don't need a general-purpose solution for this kind of problem, so you don't need to implement an inference engine - so you wouldn't have an implementation of Prolog.
Most of these kinds of problems are only about 100 lines of code even in fairly verbose languages, depending on how complicated the set you need to permute / select from is, and how complicated your clauses are.
I speak from experience here, solving packing problems in programming competition questions. Pretty much any high school / college programming competition includes one of these kinds of problems, and the time budget is rarely a lot more than an hour, as it's usually one of the simpler questions.
If only reality were based on timed competitions that 15-year-olds win. However, the problem that my colleague was solving was a lot more complicated than stacking a bunch of static boxes in an idealized compartment. In fact, the types of "boxes" and the containment structures were not known at the time of the application's development. Instead, he only knew that each could be described along various parameters. For example, there are these things called Foo and they can never be stacked on top of a Bar unless there is a Baz in between, but even then only if the transport could support that load. These were very complex logistics rules. I suppose you could have written that all in a C loop, but you'd be toast when the new Quuxes were released and new ships were developed, or yada yada yada. As it turns out, Prolog worked great.
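To illustrate the shape of it (names hypothetical, and far simpler than the real rules): stacking legality becomes data plus a couple of clauses, so a new cargo type is a new fact, not new code:

    kind(item1, foo).
    kind(item2, bar).
    kind(item3, baz).

    % A foo may never sit directly on a bar.
    forbidden_on(foo, bar).

    % can_stack(Top, Bottom): direct stacking is legal unless some
    % rule forbids it. (Load limits etc. would be extra goals here.)
    can_stack(Top, Bottom) :-
        kind(Top, KT),
        kind(Bottom, KB),
        \+ forbidden_on(KT, KB).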
I guess programming an open-ended system would have taken the 15-year-olds an extra 6-7 minutes.
The more open-ended the clauses are, the more complex it gets, for sure, because you end up embedding a DSL to encode the rules (e.g. expression trees with a simple interpreter); and the more you push up that complexity, the more sense a niche solution like Prolog makes.
What a lot of this ends up coming down to is the domain expertise of the code's author. If you're uncomfortable with recursion, or with writing simple interpreters, then Prolog is a much bigger win. Otherwise, it has a higher hurdle to climb.
Interpreters themselves are pretty trivial. Writing a parser and interpreter for a simple expression language (no control flow or anything like that, just a predicate possibly with some arguments) should by itself take no more than an hour. I know from past experience it takes me about 25 minutes, as it's one of the first things I do when I learn a new language.
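Prolog is, for what it's worth, especially quick at this. A sketch of a parser and evaluator for a tiny and/not flag language, using DCGs (all names invented):

    % Parse a token list into an expression tree.
    expr(and(A, B)) --> term(A), [and], expr(B).
    expr(E)         --> term(E).
    term(not(E))    --> [not], term(E).
    term(flag(F))   --> [F], { atom(F), F \= and, F \= not }.

    % Evaluate a tree against the list of flags that are true.
    eval(and(A, B), Fs) :- eval(A, Fs), eval(B, Fs).
    eval(not(E), Fs)    :- \+ eval(E, Fs).
    eval(flag(F), Fs)   :- member(F, Fs).

    % ?- phrase(expr(E), [hot, and, not, raining]), eval(E, [hot]).
    % E = and(flag(hot), not(flag(raining))).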
You know what? You're right, certainly moreso than people are giving you credit for. Writing a backtracking, depth-first-search brute-forcer really isn't that hard (especially if you have continuations). It's once you've also added e.g. unification that you're well on the way to reimplementing Prolog.
Considering that it still requires a significant amount of code that the Prolog solution would not require, and that code will have to be tested and debugged, I don't consider it "trivial" in any way I use the word. If we were talking theory, maybe, but we're talking implementation.
Consider that we know, and have known for some time, proper ways to manage dynamic memory. We still consider doing it correctly in practice not trivial and consider automatic garbage collection a productivity win.
From what I can remember of being an academic researcher (it was a while ago) the term "trivial" was indeed used to indicate something that wasn't of any real research interest.
This does indeed annoy people who put a lot of work into building complex systems that actually do useful stuff.
I do systems research. We only use "trivial" if something is so easy as to be an afterthought. If you're talking theory, then, fine, solved problems are "trivial." But we're not talking theory.
I'm in grad school right now. That sounds about right to me. "Trivial" means "someone already did it," while "easy" means "my graduate student is working on it." :-)
Prolog in effect supplies the recursion into search trees, but for these kinds of problems, it doesn't give you much more than that. Recursion in imperative languages, on the other hand, is not exactly difficult, so long as you're not using an 80s-era BASIC interpreter or similar. The clauses largely turn into predicates: Boolean expressions in if-statements. For this kind of static problem, you might have 3x more code in an imperative language, but most of it would be bookkeeping for recursion and backtracking, and syntactic cruft with holes in the middle for your constraint and satisfaction-criteria predicates. For most static problems with a fixed set of rules, you really won't be looking at much extra code.
This kind of algorithm is normally expected to be written, tested, debugged and presented in one hour in programming competitions with fifteen-year-old contestants. And it's only the algorithm that Prolog would help you with in this case; it's the whole application on the exterior where all the real polish goes, and where the testing is more subtle. Writing the core algorithm of a packing problem, or a routing problem, is completely trivial compared to the work that goes into the interface for entering the data. Programming competitions work with a text input file to get rid of this complexity. I speak as one who has competed in and won such competitions.
And FWIW, I don't think GC is a productivity win because managing dynamic memory is not trivial. Managing dynamic memory is trivial, but it's tedious, and it warps your program design, forcing it to be more imperative, more mutable and less functional. The cost of manual management isn't in managing the memory per se, but in managing the complexity of an application that can't rely on GC to manage the memory.
IMO GC's productivity win comes from the lowered bookkeeping costs of more modular and expressive programming styles - most notably in the ability to return newly allocated structures as return values from functions without having to negotiate away the ambiguity as to who owns them. It lets you build data structures that may share substructure, and write transformations over them which may or may not rewrite parts of the substructure, and the transformation may or may not be memoized, etc. Doing this with GC is straightforward; handling all these options with manual memory management requires ownership handoff protocols, shared ownership hacks (e.g. reference counts), inflexible conventions, a preference to mutate rather than return new, etc., greatly increasing code complexity.
And I speak of GC as someone who these days normally writes in a non-GC language, but I have implemented garbage collectors, both on the compiler (metadata) and runtime (the marking and sweeping) sides.
"This kind of algorithm is normally expected to be written, tested, debugged and presented in one hour in programming competitions with fifteen year-old contestants."
Would you deploy this code in a production application?
"Programming competitions work with a text input file to get rid of this complexity. I speak as one who has competed in and won such competitions."
So are you then saying that these kinds of problems take about an hour to solve, assuming you are someone who competes in and wins timed programming competitions? Also, what will be the maintenance costs of changing requirements going forward, as compared to adding Prolog clauses?
Well, a lot of Prolog implementations also feature finite domain and rational domain solvers, with global constraints that I don't think your 15-year-old buddies would be able to code in an hour.
Having said that, most problems we solve here are NP-complete; most of the time the trivial pruning you are mentioning does not really extend the limits of what can be computed.
For sure, I was speaking in the context of the packing problems, and even simpler discrete combinatorial problems, which really are pretty simple. But I wouldn't dismiss trivial pruning lightly; many problems can have their search branches pruned early based on their costs exceeding the current best solution, and as that solution gets better, it will start pruning higher up the search tree.
The knapsack problem, as you know, is NP-Complete.
After you've coded a bunch of programming competition problems, yes, they really are trivial. There's only a handful of techniques, primarily around what kind of set you're going to permute or select from; getting correct results is easy, but usually exponential in time. Apply some constraints at each recursion to prune the depth, and perhaps memoize if the substructure is similar, and you very quickly narrow in on efficient solutions. Use some heuristics, and you can sacrifice optimality for more speed.
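For comparison, the same select-with-pruning pattern written in Prolog, as a knapsack sketch (the item format is invented); the capacity test prunes, and backtracking does the enumeration:

    % knapsack(+Items, +Capacity, -Taken, -Value)
    knapsack([], _, [], 0).
    knapsack([w(W, V)|Is], Cap, [w(W, V)|Taken], Val) :-
        W =< Cap,                         % prune: the item must still fit
        Cap1 is Cap - W,
        knapsack(Is, Cap1, Taken, Val1),
        Val is Val1 + V.
    knapsack([_|Is], Cap, Taken, Val) :-  % or skip the item
        knapsack(Is, Cap, Taken, Val).

    % Best value by exhaustive search (SWI-Prolog):
    % ?- aggregate_all(max(V, T),
    %                  knapsack([w(2,3), w(3,4), w(4,5)], 6, T, V),
    %                  max(Best, Taken)).
    % Best = 8, Taken = [w(2,3), w(4,5)].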
It's definitely not rocket science. 15-year-old kids have been doing this for years in the IOI etc.
Now we're mixing up two criteria. Contest problems are small enough that an optimal solution can be found in reasonable time; but since packing problems usually end up NP-complete or NP-hard, in the real world you'll end up using heuristics.
Optimal packing problems are fun 1-hour programming contest fodder - so many problems that you might solve in Prolog are trivially written imperatively or functionally as combinatorial or permutational recursive searches with pruning
This is true when the problem is stable and well-specified, but when constraints are being added and removed, when there are "nice-to-have" constraints that save money but may become less tractable as the problem scales, when customers want you to experiment with novel constraints -- you do not want to maintain a homebrew imperative implementation. (I can't say anything about a functional version.) Making your implementation maintainable is tantamount to re-implementing a powerful, general-purpose system such as Prolog.
Ok, so Prolog is nice. We know that already. But the story lacks any real content... It's really just: we used Prolog, it seemed to be a lot faster to code and the customer was happy.
Does anyone else have any war stories about using Prolog in their day jobs? It's a tool I'm always on the lookout to use to simplify some of my more complex tasks, but so far I haven't had any luck in that regard.
I once had a huge pile of Oracle PL/SQL to take care of, which was written for performance rather than ease of maintenance and used a lot of session-level global data.
PL/SQL doesn't give you nice stack traces when you have an error; I just knew the entry point and the place where the error occurred.
Even knowing the data coming in to the entry point, use of the globals meant that you often couldn't know for sure what route the code would take.
I wrote a primitive PL/SQL parser (in Perl) to turn the call tree into Prolog rules, and then used Prolog to give me the possible routes between the entry point and the error location, making debugging much easier.
I think it was about 60 MB of source, around 1000 packages and stand-alone procs.
Basically, the Perl pulled a list of scoped function names from the DDL, built a text index of the executable code, and searched the index for references to those functions, creating a list of [from.function, to.function] pairs that I fed into Prolog.
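Roughly what the generated facts and the query would have looked like (function names invented here); route/3 is just transitive closure with the path collected along the way:

    % One fact per call edge found by the Perl indexer.
    calls(entry,    validate).
    calls(entry,    post).
    calls(validate, log_error).
    calls(post,     log_error).

    % route(From, To, Path): a possible call chain from From to To.
    % (Assumes an acyclic call graph; a visited list would guard cycles.)
    route(From, To, [From, To]) :- calls(From, To).
    route(From, To, [From|Path]) :-
        calls(From, Mid),
        route(Mid, To, Path).

    % ?- route(entry, log_error, Path).
    % Path = [entry, validate, log_error] ;
    % Path = [entry, post, log_error].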
At a previous job, one of my coworkers implemented a fairly complex role-based access control system by linking a Prolog interpreter into our application. Our problem domain was... very unusual, so this was probably the best solution.
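I can only guess at the shape: roles and grants as facts and rules, with the conditional grants being where Prolog earns its keep. A made-up miniature:

    role(alice, admin).
    role(bob,   auditor).

    grants(admin,   read(_)).
    grants(admin,   write(_)).
    % Auditors may read only public documents.
    grants(auditor, read(Doc)) :- classification(Doc, public).

    classification(report1, public).

    % can(User, Action): User may perform Action.
    can(User, Action) :- role(User, Role), grants(Role, Action).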
It was the similarity to Prolog that first got me excited about Erlang; I thought I'd found a language built on the same principles that could be used in the real world. Then I discovered there was no backtracking...
So, in other words, the customer's system can now be described as being "a client/server application written completely in [not Prolog] with SQL Server behind it except for this ONE piece - THAT one is in Prolog!"
Next question - Who's going to maintain it?
I mean, props for using Prolog to tackle the complex problem and save gobs of money/time, but doesn't an approach like this seem _really_ risky?
Are there things Prolog can do that embedded solvers cannot? In .NET, Solver Foundation (http://code.msdn.microsoft.com/solverfoundation) is a great fit for solving from a set of rules. And I believe most languages can embed some sort of rules or optimization engine.
I've used Drools a fair bit in my current job, but I've never learnt much about Prolog. Can anyone summarise where Prolog would work better or provide additional features?