Yea - Prolog is super neat and gets you thinking in a very different way. It was my first real introduction, so it took a bit of time getting used to thinking the "Prolog" way.
There is a revival of Datalog, a subset of Prolog, in relational databases and RAD. E.g., see LogicBlox or Datomic.
I've always thought Datalog should have taken a big chunk of SQL's market. And I wish something like Mozart/Oz were more mainstream, so that we could implement subsystems using logic programming wherever needed.
Yea I agree. It really looks like he just took the Datalog language, put it in a browser, and added an in-browser IDE (basically like he did with ClojureScript and Light Table). Not that this couldn't be incredibly useful, and not that his vision of a billion non-developers able to program couldn't become a reality (through Eve), but I think everyone was expecting something more--something with that groundbreaking Bret Victor feel--which is what I think you're alluding to. Unfortunately, I don't think there's anyone who could mentor him--or they would do it themselves--bar possibly Bret Victor. My hope is that his grand plan evolves into something that in sum is as groundbreaking/insightful as we had imagined. At the end of the day, you gotta give it to him regardless. Not many people are going for it like he is. I once was, but am back to application development. So nothing but the best of luck to Chris from me.
A very interesting aspect of Prolog is how its primary primitive is backtracking search.
This is orders of magnitude more complex than C-style primitives, which closely match single machine instructions, or even the slightly more complex evaluation models of lazy languages like Haskell.
It's strange that you can build a language almost purely off of graph search yet have it be generally useful. Makes you wonder what other techniques spawn their own paradigm.
There's at least one language (Refal: there may be more!) based on Markov algorithms. Another language (XL: as far as I know this is the only one of its breed thus far) works at its core on parse tree substitution. Yet another (CHR) does a weird generalization of term re-writing to provide a toolkit for making constraint languages. (Yes, a language to make languages.)
There's a reason I've become a programming language whore.
Are there any other languages that are mainly designed with the goal of making it easier to make other general-purpose languages (as opposed to languages for DSLs or parser generation etc.)?
I know that generally many languages are written in C or compile to C, and that Lisp can also be used to make other languages or sub-languages.
But I'm asking about languages (mostly) dedicated to that purpose: the creation of new general-purpose programming languages.
That said, the parent's claim about CHR is rather broad, and you may have read too much into it. It's a library for writing new constraint programming libraries for Prolog. Combined with operator overloading, you get embedded DSLs for constraint systems. That's great, and it is "language development" in some sense. But CHR is not a tool for implementing interpreters or compilers for new programming languages (though it may be useful for some subtasks).
Lisp is used for making languages that can usually all be used together in the same project and even mixed together.
If I had to write a C or C++ compiler from scratch, I'd do it in Lisp. That project probably wouldn't work in the usual "language inside Lisp" way; it would spit out assembly code or object files which then have nothing to do with Lisp.
The GNU people who originally wrote GCC (Stallman, et al.) wanted to use Lisp; they wrote it in C instead because C was well supported on Unix-like systems. Stallman is a Lisper; he was on the ANSI CL committee and of course is well known for the Emacs work.
GCC internals are full of Lisp terminology and "Frankenstein monster" versions of Lisp data structures.
As an example, someone posted the following link on Reddit, in the Japanese subreddit:
It's a GCC commit, whose message discusses the optimization using Lisp notation. The C code doesn't look like Lisp, but the comment explains what it's doing as an S-exp.
"The GNU people who originally wrote GCC (Stallman, et al.) wanted to use Lisp; they wrote it in C instead because C was well supported on Unix-like systems."
Interesting, did not know they wanted to use Lisp for GCC, though I've read some about Stallman's work.
Edit:
Makes sense, I guess. I've read that functional languages are well-suited to the domain of writing language compilers. Seems logical, because a transformation from, say, a C program to assembly or machine code, can be thought of as a function call:
y = f(x)
where x is the C program, y the machine code, and f the compiler :)
"Stallman is a Lisper; he was on the ANSI CL committee and of course is well known for the Emacs work."
Steele's CLtL (1) gives a list of people who were involved in the actual ANSI X3J13 committee, but Stallman isn't listed.
Stallman, however, is credited in that very same book and section as having worked on an implementation of the "New Error System" (NES) as follows: "A reimplementation of the NES for non-Symbolics Lisp Machine dialects (MIT, LMI, and TI) was done at MIT by Richard M. Stallman. During the process of that reimplementation, some conceptual changes were made which have significantly influenced the Common Lisp Condition System."
There you go: Stallman is noted as having been part of an early "Common Lisp Group", and did some implementation work which influenced the CL condition system.
This was not ANSI CL. This was CLtL1 Common Lisp. X3J13 was formed a few years later. I never heard of him being active in X3J13.
Stallman never did much work with or on CL. I never had the impression that Stallman was very active in CL design. The mailing list archives would clear this up...
Stallman actually does not like CL; he has criticized CL features many times and prevented them from being used in GNU Emacs.
The NES was done at Symbolics. Stallman likely tried to copy it for LMI, while he was fighting against Symbolics. But that phase did not last long.
Interesting. IIRC I came across a series of libs from that family recently - for different languages, including Python. Need to check it out. Update: Googled, it's right here:
"There are implementations of miniKanren in Haskell, Racket, Ruby, Clojure, and Python. The canonical implementation is an embedded language in Scheme. The Clojure core.logic library was inspired by miniKanren."
The name kanren comes from a Japanese word for "relation".
The class I was taking kind of blew my mind when I saw how you can solve some problems in such cool ways with Prolog, but I would certainly admit that this server isn't exactly the kind of use it's best suited for.
How so? I mean, there are logic libraries available for python to be sure. But the "set class and comprehension syntax" are two features only tangentially related to logic programming. Using them alone doesn't get you inference, or even unification.
Inference is straightforward using dictionaries, membership-testing, and set methods, like subset, superset, disjoint, union, intersection, and difference.
Python can't do a conditional assignment as nicely as Prolog's unification, but you can get similar behavior via the application of predicates in comprehensions and using set methods. I'm not saying Python is equivalent, but simply that you can get some of Prolog's style in Python. Erlang's pattern matching and concept of bound vs unbound variables is a bit closer to Prolog's unification.
>>> likes = {'mary': {'food', 'wine'},
... 'john': {'wine', 'mary'}}
>>> 'food' in likes['mary']
True
>>> 'wine' in likes['john']
True
>>> 'food' in likes['john']
False
>>> # John likes anything that Mary likes
>>> likes['john'] |= likes['mary']
>>> # John likes anyone who likes wine
>>> likes['john'] |= {name for name, objs in likes.items() if 'wine' in objs}
>>> # John likes anyone who likes themselves
>>> likes['john'] |= {name for name, objs in likes.items() if name in objs}
2. Slightly more complicated family tree.
>>> male = {'james1', 'charles1', 'charles2', 'james2', 'george1'}
>>> female = {'catherine', 'elizabeth', 'sophia'}
>>> parents = {'charles1': 'james1',
... 'elizabeth': 'james1',
... 'charles2': 'charles1',
... 'catherine': 'charles1',
... 'james2': 'charles1',
... 'sophia': 'elizabeth',
... 'george1': 'sophia'}
>>> # Was George I the parent of Charles I?
>>> parents['charles1'] == 'george1'
False
>>> # Who was Charles I's parent?
>>> parents['charles1']
'james1'
>>> # Who were the children of Charles I?
>>> {child for child, parent in parents.items() if parent == 'charles1'}
{'charles2', 'james2', 'catherine'}
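A hypothetical continuation of the family-tree session above (my own addition, not from the original post): the same dict-and-comprehension tools also compose, so you can answer grandparent questions by following the parent link twice.

```python
# Hypothetical extension of the family-tree session above:
# composing the parent relation with itself to get grandparents.
parents = {'charles1': 'james1',
           'elizabeth': 'james1',
           'charles2': 'charles1',
           'catherine': 'charles1',
           'james2': 'charles1',
           'sophia': 'elizabeth',
           'george1': 'sophia'}

# Who was George I's grandparent? Follow the parent link twice.
grandparent = parents[parents['george1']]
print(grandparent)  # 'elizabeth'

# Who were the grandchildren of James I?
grandchildren = {c for c, p in parents.items()
                 if parents.get(p) == 'james1'}
print(grandchildren)  # charles2, catherine, james2, sophia
```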
3. Recursion: Towers of Hanoi
>>> def move_one(x, y):
... print('Move top disk from {0} to {1}'.format(x, y))
... return True
...
>>> def move(n, x, y, z):
... if n == 1:
... return move_one(x, y)
... m = n - 1
... move(m, x, z, y)
... move_one(x, y)
... return move(m, z, y, x)
...
>>> move(3, 'left', 'right', 'center')
Move top disk from left to right
Move top disk from left to center
Move top disk from right to center
Move top disk from left to right
Move top disk from center to left
Move top disk from center to right
Move top disk from left to right
True
4. An example using lists:
This last one is a bit silly in Python as all the tools shown are built-ins.
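The code for part 4 didn't survive formatting; presumably it demonstrated list built-ins along these lines (my guess at the spirit of it, not the original code). Each built-in is an ordinary one-way function over a list.

```python
# A guess at the kind of list built-ins the elided example showed;
# each is a one-way function from a list to an answer.
xs = [3, 1, 4, 1, 5, 9, 2, 6]

print(4 in xs)           # membership test: True
print(len(xs))           # length: 8
print(sum(xs))           # total: 31
print(max(xs), min(xs))  # extremes: 9 1
print(sorted(xs))        # sorted copy: [1, 1, 2, 3, 4, 5, 6, 9]
```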
I appreciate you taking the time. Your approach shows the first steps towards building an inference engine in Python, but these are all very trivial examples. The difference between this and a real logic language will become apparent as the database of facts and rules becomes more complicated.
In order to follow this through, you would need to separate all of the search code (i.e., the comprehensions) from the declaration of facts and rules. If you were successful in doing so, you would have made a basic logical inference engine in Python.
Consider one of the queries described in the family tree example: "M is the mother of X if she is a parent of X and is female"
In Prolog this might look like:
mother(X, M) :-
parent(X, M),
female(M).
Having built that rule, I can ask
1) Who is the mother of sophia? : mother(sophia, X).
2) Who is sophia the mother of? : mother(X, sophia).
3) Which mothers have male children? : mother(M, X), male(M).
The question is, how would you define the `mother` relation using comprehensions in a way that is agnostic to the specific questions that will be asked? You would inevitably need to write a different loop for each variant of the question. The trick is to write an engine that, given a query, automatically generates the comprehensions required to find the answers. It isn't an easy problem at all.
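For illustration, here is a minimal hand-rolled sketch (my own, not a real engine, with made-up facts): `None` stands in for an unbound Prolog variable, so a single `mother` generator can answer all three question shapes. It only handles arguments that are fully bound or fully unbound; real unification and rule chaining are where the hard work starts.

```python
# Minimal sketch of a query-agnostic relation in Python.
# None plays the role of an unbound Prolog variable; the facts
# and names here are illustrative only.
parents = {'charles2': 'charles1',   # child -> parent
           'sophia': 'elizabeth',
           'george1': 'sophia'}
female = {'elizabeth', 'sophia', 'catherine'}

def mother(child=None, m=None):
    """Yield (child, mother) pairs consistent with the arguments."""
    for c, p in parents.items():
        if p in female and child in (None, c) and m in (None, p):
            yield (c, p)

print(list(mother(child='george1')))   # [('george1', 'sophia')]
print(list(mother(m='elizabeth')))     # [('sophia', 'elizabeth')]
print(sorted(mother()))                # all (child, mother) pairs
```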
To make this even clearer, consider your claim that the tools in part 4 are trivial in Python because they are all built-ins. But this is not true at all! Python's built-ins are one-way functions, rather than relations. In Python I can ask "what is the sum of the elements in this list?" but I can't as easily ask "what is a list of 30 unique integer elements that sum to 100?" To do this in Python you would need to put some thought into how you would generate lists using nested loops so that all of the properties were satisfied. In Prolog you just list the search criteria and it does the work for you:
size(L, 30),
unique(L),
sum(L, 100).
Edit: By the way, check out pyDatalog if you are interested in a fully featured inference engine built in python:
Prolog relaxes the requirement to think linearly, bottom-up. If you write all the facts and predicates bottom-up it's easy in Python. Dictionaries, sets, and comprehensions take care of everything. As you say, it's trivial.
If you want to write top-down, defining what a mother is before defining who are parents and who are female, then you'll need to write functions and it gets trickier in Python. The sticking point is when to transition from functions to dict/set -- what's a predicate and what's a fact. We could get fancy with decorators to create some funky predicate class, but let's stay away from metaprogramming for the moment.
I agree that Python (without getting fancy) requires the programmer to think through a bit more of the inferential logic than Prolog would under some circumstances. How often do those circumstances arise in practical problems? I'm not sure.
However, I've got to accept the challenge: "What is a list with 30 unique integer elements that sum to 100?"
First, there is no list of 30 unique positive integers that sums to 100; even the 30 smallest distinct nonnegative integers already sum to far more than that:
>>> sum(range(30))
435
Allowing negatives would create an infinity of possible solutions. So, I'm going to rephrase the problem: generate a list of N unique nonnegative integers that sum to M.
>>> from itertools import combinations
>>> m = 30
>>> n = 5
>>> combos = (c for c in combinations(range(m), n) if sum(c) == m)
>>> next(combos)
(0, 1, 2, 3, 24)
In one of my first jobs out of college, I got assigned to figure out how to make the Tivoli Enterprise Console do something useful. They gave up on the IGS guys and picked me because I took a class that used Prolog in my sophomore year.
IBM bundled some ancient Prolog dialect (copyright 1991, IIRC) and the product was a mess.
But it was a great time... a coworker and I came up with a way to persist facts to DB2 and decode/standardize all sorts of fubar network device alarms. Good times.
The embedded Prolog was ProLog by BIM, from BIM Systems, which was the Belgian-based competitor to a US-based company called Quintus, in Mountain View; both offered high-end Prolog systems for Unix.
One of the things that ProLog by BIM had was the ability to have Prolog use Oracle or Sybase databases as if they were Prolog's own native database (Spooky23, if you see this, I was wondering if that helped you use DB2 in this fashion when doing your Tivoli work or did you roll your own from scratch?)
Anyway, Tivoli apparently created a macro language, TEC, that was then compiled down to Prolog to run.
Here's an opinionated look at TEC and the decision to use Prolog:
Those BIM features were unknown to us, as we had no BIM manuals. We took some open source code from a mailing list that let you dynamically load shared libraries. That's how we interacted with the DB2 client, and we did a similar thing with Perl so the admins could do some work for us.
The IBM macro stuff was mostly there to power their (Motif based) rule builder GUI and was pretty limited unless you pre-processed events upstream and spoon fed it. That's why the IGS guys struggled -- they were certified against the base product (and had minimal experience beyond installing the thing) and the salespeople were aspirationally selling whatever one of the big financial services places did with the product.
It's been 15 years so I'm hazy but I'm pretty sure I tossed most of the macros that shipped with the product. It was a really fun gig and I learned a lot from it.
Last time I looked (admittedly years ago), it wasn't an unambiguously "better Prolog", because Mercury's static mode system couldn't capture all of what Prolog can do with unification.
A common Prolog programming pattern involves creating a partially instantiated term (a "structure" with "holes") and filling in the details later. This can improve expressivity, composability, and performance.
For example, this code:
L = [_,_,_], maplist(p, L)
enumerates all three-element lists whose members satisfy the predicate p. This technique is impossible in Mercury because it does not allow terms with holes.
Of course you can still compute the same result in Mercury, but you will have to do more work: I think you would have to write a specialized predicate instead of using a higher-order predicate from the library.
Thanks for the explanation. I agree the example is concise and powerful. I'll run it by Mercury users or developers to see what their answer to that is. They might have a good solution, or maybe not, due to the language's design.
You're welcome! I last looked at Mercury about 3 years ago, and I vaguely remember partial instantiation being documented as an eternal low-priority future feature. Maybe there's been progress... or maybe they'll say this is less useful than Prolog weenies like to claim ;-)
Out of interest, in what sort of capacity do you talk to Mercury users and developers if you don't use it yourself?
Oh, I just bring attention to tech that people might be interested in. You're the first to mention a drawback. So, I might just ask one of them if there's a list of things people like about Prolog that are hard or non-existent in Mercury. Next time I'll be better informed if anyone asks. That's all.
I do have a slight tie-in to my field, where I keep logic systems on my radar in case they become useful for (a) executable specifications, (b) connections to provers like Milawa, (c) easier formal verification, or (d) commercial deployment of a good one that almost skips the coding phase by keeping code close to specs. Mission Critical IT claims to use Mercury for (a) and (d), and a finance developer linked here a while back was using Prolog for a whole app, with easy extensions and fixes as a result. So, these conversations might have benefit later.
That looks really cool. To be honest I hadn't planned to use Prolog too much beyond this project, but I'll give Mercury a try if I decide to dive deeper into logic programming. Thanks!
Have you learned about definite clause grammars yet? Once you do, you may be tempted to rewrite your parser, because they're a much more elegant way of constructing parsing rules.
I have not learned about that, but I'm very interested now. Before I go and google it, do you have a specific place you'd recommend I go to learn about it?
or don't, because .pl is _also_ Prolog, and if you use something like SWI-Prolog, which is available as a Windows executable, chances that you're still also using Perl are preeeetty small.
It's not a rule I can remember encountering anybody actually following, but it was still theoretically a thing at least up to about a decade ago, which is very definitely the perl5 era.
It's not where I thought it was in the perldoc though, so either it isn't in there at all or I screwed up looking for it.
So, roughly, "I think it turns out we're both sort-of wrong" :)
The author states many times that they did this to learn about networking and Prolog, killing two birds with one stone. This is clearly in no way meant to run any serious server software.
Are you saying that people shouldn't teach themselves powerful technologies?
> Are you saying that people shouldn't teach themselves powerful technologies?
It was clear they weren't saying that. What was being said was that Prolog is just about the last lang you'd do this in. Aside from that, I think everyone else was aware that this was an experiment.
To add:
I think this is a great approach. The last highly upvoted article suffered from being the same ol' hello-world intro to Prolog. It's fun to see a real-world problem tackled.
My understanding is that there's a very limited amount of research going into Prolog, so it ends up being incredibly slow for larger tasks. And a lot of its functionality can be done via standard SQL, so there's not a ton of use cases for it.
Prolog implementations are still the fastest, most researched, and most practical logic languages out there. I don't know what the alternative would be, besides something fundamentally less powerful, like Datalog or SQL.
SQL only gets you so far. For example, you can't do something as straightforward as logic-based arithmetic in SQL.
Ah got it - thanks for letting me know. Do you know how popular logic languages are these days? I've watched a few talks on Datalog so it seems there's something there.
Agree with you on the SQL part and was just making the point that a lot of the intro Prolog examples can be done in SQL.
I don't know how much research is going into Prolog itself, but there are logic programming languages that have seen active development recently, like Mercury https://mercurylang.org/
I also did a stint messing around with Prolog and it was pretty awesome discovering an entirely new programming paradigm. I ended up doing the Priceonomics puzzle (https://priceonomics.com/jobs/puzzle/) in Prolog and wrote up about it here - http://dangoldin.com/2013/06/07/fun-with-prolog-priceonomics...