Esoteric programming paradigms (ybrikman.com)
397 points by SlyShy on May 1, 2017 | 148 comments



I have two comments about the Prolog code:

First, the article claims: "the sudoku solver above does a brute force search", but that is specifically not the case. In fact, quite the opposite holds: The Prolog Sudoku formulation shown in the article uses a powerful technique known as Constraint Logic Programming over Finite Domains, which automatically performs pruning before and also during the search for solutions. This effectively eliminates large portions of the search space in practice, and degenerates to brute force only in those cases where almost nothing can be deduced from initial constraints. In the particular case of Sudoku, the pruning is especially effective and can in many cases eliminate the search entirely, since the initial constraints (hints) basically determine the whole solution.

Second, yes, it is easy to write an O(n!) search algorithm in Prolog. However, it is almost as easy to implement much more efficient search algorithms in Prolog. For example, here is Quicksort in Prolog:

    quicksort([])     --> [].
    quicksort([L|Ls]) -->
            { partition(Ls, L, Smalls, Bigs) },
            quicksort(Smalls), [L], quicksort(Bigs).
Note how natural the declarative description of "first the smaller elements, then the pivot element, then the bigger elements" is in Prolog. This only requires a suitable implementation of partition/4, which is very easy to implement in at most 7 lines of Prolog code.
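For reference, partition/4 could look like this - a minimal sketch, assuming integer elements (CLP(FD) comparisons like #=< would keep it more general):

    % partition(Ls, Pivot, Smalls, Bigs)
    partition([], _, [], []).
    partition([X|Xs], P, [X|Ss], Bs) :-
            X =< P,
            partition(Xs, P, Ss, Bs).
    partition([X|Xs], P, Ss, [X|Bs]) :-
            X > P,
            partition(Xs, P, Ss, Bs).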


Eh, I'm pretty sure that suffers from the problem noted here:

https://news.ycombinator.com/item?id=13375108

See the "genuine sieve of eratosthenes" paper, and also:

http://augustss.blogspot.com/2007/08/quicksort-in-haskell-qu...


I do not dispute your point. However, the main issue is that the worst case complexity of this Prolog quicksort, whether in place or not, is already only polynomial, in contrast to the O(n!) permutation sort given in the article, and only about equally hard to implement.

In Prolog, an arguably more natural sorting method is merge sort, and it can (also) be implemented with asymptotically optimal efficiency in Prolog as follows:

    mergesort(Ls0, Ls) :-
            length(Ls0, L),
            zcompare(C, L, 1),
            halving(C, L, Ls0, Ls).

    halving(<, _, Ls, Ls).
    halving(=, _, Ls, Ls).
    halving(>, L, Ls0, Ls) :-
            Half #= L // 2,
            length(Lefts0, Half),
            append(Lefts0, Rights0, Ls0),
            mergesort(Lefts0, Lefts),
            mergesort(Rights0, Rights),
            merge(Lefts, Rights, Ls).

merge/3 is a built-in in several Prolog systems, and if your system does not provide it, you can easily implement it in less than 10 lines of Prolog code.
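For example - a sketch, assuming integer elements:

    % merge(As, Bs, Ms): Ms is the sorted merge of sorted lists As and Bs
    merge([], Bs, Bs).
    merge([A|As], [], [A|As]).
    merge([A|As], [B|Bs], [A|Ms]) :-
            A =< B,
            merge(As, [B|Bs], Ms).
    merge([A|As], [B|Bs], [B|Ms]) :-
            A > B,
            merge([A|As], Bs, Ms).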

Both quicksort and merge sort Prolog implementations are (even in their respective worst cases) vastly faster than the sorting algorithm given in the article (in its worst case, and also in its average case), and are at most marginally more complex to implement. For this reason, I think the article could be improved by simply showing one of these algorithms instead.

We cannot consider it a drawback of Prolog that computing all permutations takes O(n!) time in this language. It's the same for C, for example. If someone wants faster sorting, we can implement it also in a declarative language!

Somewhat strangely, the fact that you can easily write a Prolog program to generate all N! permutations of a list with N elements is more likely to be counted as a disadvantage than as an advantage, in my experience. If people looked at C and Prolog a bit more fairly in such overview documents, they could as well count it as a disadvantage of C that such a program is harder to write in C than in Prolog. Yet, I have never seen a C program implementing permutation sort in such articles, and never seen a comparable argument or suggestion about C as I routinely see it about Prolog.

Permutation sort is slow in C! Well, that's a well-known shortcoming of an imperative language such as C.
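For reference, here is what such a permutation sort looks like in Prolog - a sketch, assuming permutation/2 is available (it is a common library predicate):

    % permutation_sort(Ls, Sorted): generate permutations until a sorted one is found
    permutation_sort(Ls, Sorted) :-
            permutation(Ls, Sorted),
            sorted(Sorted).

    sorted([]).
    sorted([_]).
    sorted([A,B|Rest]) :-
            A =< B,
            sorted([B|Rest]).
Writing the same thing in C requires managing the permutation state by hand, which is exactly the point.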


FD constraint programming is powerful. Mozart/Oz goes one step further by making the search strategies programmable within the language using the idea of computational spaces.

I think two of the OP's notes are covered in Mozart/Oz - concurrent programming using dataflow variables (roughly equivalent to promises) and logic programming, including finite domain constraints.
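Plain Prolog systems offer a taste of such dataflow variables via coroutining - a sketch using freeze/2 (available e.g. in SICStus and SWI-Prolog), where a goal delays until a variable is bound, much like attaching a callback to a promise:

    ?- freeze(X, format("got ~w~n", [X])),
       X = 42.
    got 42
    X = 42.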


I agree, with only one small additional remark: In general, a programmable search strategy sacrifices termination guarantees for more generality. That's a completely defensible approach of course, and not an argument against this path of development! Still, it comes at a price, and researchers working in termination analysis may prefer precisely the tighter reins and more stringent resulting properties we obtain when we eschew programmable strategies in specific parts of the language.

For finite domain constraints, such guarantees are a major attraction in the Prolog community: If setting up the constraints terminates, then setting up the constraints plus search also terminates. That's a rather valuable and strong guarantee (since it is the search that can take infeasibly long more often than setting up the constraints), and it breaks down if the search itself is more generally programmable.


I think you are just using a different definition of brute force than the post is using. It doesn't try all permutations of the digits, but it does do a search over the tree to find the solutions.

Now, to your point, this is really the only way you can solve sudoku. There is an odd belief that you can make an algorithm that "never guesses." And folks think that that would be the definition of a non-brute force algorithm. In reality, you either have a solver that is ready to backtrack, or you have one that can't solve all puzzles.


I am not sure about your backtracking point. Sudoku can be solved by Integer Programming (i.e. a special subset of Linear Programming) and this does not require any kind of backtracking.


While I wouldn't be shocked to know that I'm wrong on requiring backtracking, I am not sure how the integer programming claim refutes it. In particular, those look to still be "search" solvers and almost certainly have to perform some "guess" in making the search.

Do you happen to have a recommended link on how this can be accomplished? First few results in searching just show farming this out to a specialized function in matlab. :)


It is a bit difficult to explain Linear Programming in a forum post (so I will have to default to Wikipedia: https://en.wikipedia.org/wiki/Linear_programming or to provide a link to a nice MOOC: https://www.edx.org/course/optimization-methods-business-ana...).

There is absolutely no "guessing" involved, though: you describe your sudoku problem as a series of linear equations (which represent the edges of your search space) and then the algorithm travels from one vertex to the next until it finds the solution. There is no backtracking at all.
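To make this concrete, here is a tiny example of that vertex-walking process - a sketch using SWI-Prolog's library(simplex), with made-up problem data:

    :- use_module(library(simplex)).

    % Maximize 2x + 3y subject to x + y =< 4, x + 3y =< 6, x,y >= 0.
    lp_example(X, Y) :-
            gen_state(S0),
            constraint([1*x, 1*y] =< 4, S0, S1),
            constraint([1*x, 3*y] =< 6, S1, S2),
            constraint([x] >= 0, S2, S3),
            constraint([y] >= 0, S3, S4),
            maximize([2*x, 3*y], S4, S),
            variable_value(S, x, X),
            variable_value(S, y, Y).
The query ?- lp_example(X, Y). yields X = 3 and Y = 1 (the exact number representation varies by system): the simplex algorithm moves from vertex to vertex of the feasible region, never backtracking.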


While this may be true for linear programming, is it true for integer linear programming or binary integer programming, which one would presumably use to model Sudoku? The Wikipedia article you posted claims that integer programming problems are NP-hard.

> If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0-1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems.

Here is an article on the topic of "Binary and Mixed Integer Programming" which explains part of the Balas Additive Algorithm (infeasibility pruning, I believe) in terms of backtracking:

> At this point, both of the nodes just created are not eligible for further expansion, so we back up the tree, looking for a level at which one of the nodes is unexplored.

http://www.sce.carleton.ca/faculty/chinneck/po/Chapter13.pdf

Another paper describes a method by Glover characterized as "A backtracking procedure for implicit enumeration".

> A particularization of the procedure based on Balas' algorithm. In S2 we presented and justified a flexible and economical back-tracking procedure for finding an optimal feasible solution of (P) by implicit enumeration.

See Figure 1 on page 182 (or page 6 in the PDF):

http://www.anderson.ucla.edu/faculty/art.geoffrion/home/docs...

This branch of mathematics is not my forte however, so if I've misunderstood then I'd appreciate clarification. It seems like the algorithm is not backtracking in the sense of generating possible solutions and checking them, but is backtracking in the sense of fathoming which next cheapest (partial) solution might be feasible, and abandoning it if proven to be definitely infeasible, in favor of examining the (then) next cheapest potential solution.

Finally, see the following paper that compares and contrasts Constraint Programming with Integer Programming, and characterizes both of them as instances of Tree Search:

http://preserve.lehigh.edu/cgi/viewcontent.cgi?article=1780&...


Extremely Coarse Approximation Alert

Try to imagine this: by manipulating an n-dimensional matrix in a certain way, you get, at each step, a result which is guaranteed to be part of your solution space. You also have a way to find out, at each step, if you have found the optimal solution (in terms of your objective function) or if no solution exists.

So the process goes like this: Put your constraints in the matrix.

  Loop -
    Manipulate Matrix
    Is this the optimal Solution?
      Yes - return solution
    Is the solution space empty or unbounded?
      Yes - return error
  End.
No backtracking in the sense of "try this... hmm... no, try this other".

I have used LP professionally in the past, and recently participated in the MOOC I linked above (as a refresher), but I may well be missing something (I studied the theory at uni too many years ago - now I just use it as a tool) or be overgeneralizing. If anyone can provide corrections, they will be welcomed.


Could you clarify which algorithm you're referring to? Or name an example algorithm that works in the way you describe? What I find in academic literature seems to characterize Integer Programming as a search problem. See the following article that compares Constraint Programming with Integer Programming:

http://preserve.lehigh.edu/cgi/viewcontent.cgi?article=1780&...

> Since tree search is a basic solution technique employed in both constraint and integer programming, we begin with a generic overview of tree search as a technique for finding feasible solutions to mathematical models.

> Every tree search algorithm is defined by four elements: a node-processing procedure, pruning rules, branching rules, and search strategy. The processing step is applied at every node of the search tree beginning with the root node, and usually involves some attempt to further reduce the feasible set associated with the node by applying logical rules. [...]

> One particularly well-developed solution technique, called branch and bound, was introduced by Land and Doig. Branch and bound is frequently used to find solutions to integer programming problems; it is a tree search procedure in which a bound on the optimal value to each subproblem is obtained by solving a relaxation. In this thesis we assume the use of a linear programming relaxation in the branch and bound tree.

> The last piece of a branch and bound algorithm is a search strategy specifying the order in which to explore nodes of the tree. Depth-first search can be used to save memory in a branch and bound tree, since we must only remember the difference in solution between two successive nodes of the search tree.

Chapter nine of Applied Mathematical Programming from MIT classifies integer program solvers into three major groups:

http://web.mit.edu/15.053/www/AMP-Chapter-09.pdf

> Whereas the simplex method is effective for solving linear programs, there is no single technique for solving integer programs. Instead, a number of procedures have been developed, and the performance of any particular technique appears to be highly problem-dependent. Methods to date can be classified broadly as following one of three approaches:

> i) enumeration techniques, including the branch-and-bound procedure;

> ii) cutting-plane techniques; and

> iii) group-theoretic techniques. [...]

> Branch-and-bound is essentially a strategy of ‘‘divide and conquer.’’ The idea is to partition the feasible region into more manageable subdivisions and then, if required, to further partition the subdivisions. In general, there are a number of ways to divide the feasible region, and as a consequence there are a number of branch-and-bound algorithms.

> An integer linear program is a linear program further constrained by the integrality restrictions. Thus, in a maximization problem, the value of the objective function, at the linear-program optimum, will always be an upper bound on the optimal integer-programming objective. In addition, any integer feasible point is always a lower bound on the optimal linear-program objective value. The idea of branch-and-bound is to utilize these observations to systematically subdivide the linear programming feasible region and make assessments of the integer-programming problem based upon these subdivisions.

The book describes the "cutting-plane" approach, which does seem to work more like how you're describing (a series of transformations), but also says:

> In practice, the branch-and-bound procedures almost always outperform the cutting-plane algorithm. Nevertheless, the algorithm has been important to the evolution of integer programming. Historically, it was the first algorithm developed for integer programming that could be proved to converge in a finite number of steps. In addition, even though the algorithm generally is considered to be very inefficient, it has provided insights into integer programming that have led to other, more efficient, algorithms.


As I mentioned elsewhere, it has been ages since I took an actual formal exam on Linear Programming (and Integer Programming) - since then I have always used dedicated software to solve LP (there is for example a nice, self-contained LP solver in Numerical Recipes in C), and so I always focused more on how to describe the problems in terms of linear constraints.

"In my mind" it works like this: https://www.quora.com/What-is-the-difference-between-integer...

Having said this, while solving an NP-hard problem is still guaranteed to terminate, the actual computation time may take ages. Integer Programming may very well leverage the fact that the solution is restricted to integer values and apply different, faster algorithms to exploit this property - including search trees. But in the most general sense you do not need a search tree or backtracking for Linear Programming.


What algorithm for ILP do you use? I think most (all?) solvers use a variation of branch-and-bound.


I am answering here (hoping that it will be read by others too).

First of all: thanks everyone for forcing me to go back and re-visit some stuff about LP/IP - I think I was wrong - some sort of search tree/branch and bound is not just a speedup trick - it looks like it is the only way to actually get a correct solution for IP problems.

So let me amend my two previous statements:

  "Linear Programming does not require any search tree or backtracking" is CORRECT.
  "Linear Programming can solve Sudoku" is *WRONG* - for Sudoku you need Integer Programming, and that requires an approach that involves search trees/backtracking.
Sorry for the confusion, and thanks again for making me recheck my assumptions.


I'm pretty sure cutting-plane methods for IP don't strictly require "tree search" (of course, they still require super-polynomial time to solve IP problems).

https://en.wikipedia.org/wiki/Cutting-plane_method


To your original point, they do not consider these "backtracking" in the standard sense. So, I'd actually agree to your original point and that I was wrong.

I should also have stressed harder that I agreed with your definition of brute force. I was just offering an explanation for where I think the post went awry.


No, a domain-consistent all-different propagator such as the one described in [1] is sufficient to determine a valuation domain for a Sudoku puzzle without any search.

Of course that does not hold in general, but in the special case of Sudokus and problem modelling by a domain-consistent all-different propagator, no search is required for a solution.

[1] A filtering algorithm for constraints of difference in CSPs, J-C. Régin, 1994


This thread exploded after I went to sleep. Apologies for the silence and again a huge thanks to everyone else in the thread.

It is possible to craft a sudoku that has a unique solution without having any forced moves at the beginning. Turns out most are not actually hard, but it is doable. When I get back near my book at home, I can post an example, if folks are interested.


To see what I mean, please consider the Prolog program for solving Sudoku puzzles that is shown in this article, and try the following query:

    | ?- sudoku(X, Y).
This is called the most general query, since all arguments are fresh variables. Declaratively, we are asking Prolog: "Are there any solutions whatsoever?" In this case, the system answers with:

    X = [_#3(1..4),_#24(1..4),...etc.]
    Y = [_#3(1..4),_#24(1..4),...etc.]
This shows that the Prolog program did not perform any search at all: No concrete value is instantiated, and the system does not ask for alternatives. That's right! No search whatsoever, and no backtracking at all, is performed in this program. No matter which definition of brute force search you are applying, this definitely does not fall into "search" at all!

In the article, a more concrete query is also shown, as well as its solution. This solution is found via constraint propagation alone. This means that the system has deduced the unique solution purely by reasoning about the available constraints, which it does automatically every time a constraint is posted. This is one example of an algorithm that does not guess.

This also shows that even supposing you cannot make an algorithm that "never guesses", you can definitely make an algorithm that has to do much less guessing than searching over the whole tree. In Prolog, such algorithms are implemented and encapsulated in powerful combinatorial constraints like all_distinct/1, which are available in many Prolog systems and which you can use in your applications. Internally, the associated constraint propagators prune the search tree by applying various algorithms for you behind the scenes. The ability to use such constraints and their strong pruning are major attractions of using Prolog for combinatorial tasks.

You are right that there may be cases where such propagation, albeit quite strong, may not suffice to fully solve a concrete combinatorial task. For this reason, you have to apply a concrete enumeration of remaining variables. In Constraint Logic Programming over Integers, this search is called labeling and provided by predicates like fd_labeling/2 or similar, depending on your Prolog system. The article does not use them though, and even if it did, the search could be guided by various heuristics by simply supplying a few options, which together with the pruning applied by constraints distinguish such a search from more uninformed search strategies.
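For instance, with SWI-like clpfd syntax, a heuristic is selected by simply passing an option to the labeling predicate - a sketch, where ff selects the "first fail" strategy:

    ?- Vs = [X,Y,Z], Vs ins 1..3,
       all_distinct(Vs),
       labeling([ff], Vs).
    Vs = [1, 2, 3], X = 1, Y = 2, Z = 3 ;
    Vs = [1, 3, 2], X = 1, Y = 3, Z = 2 ;
    ...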

Note also that the posted solution is quite hard to generalize elegantly. A shorter and equivalent Prolog program that describes 4x4 Sudoku puzzles is:

    sudoku(Rows) :-
            Rows = [Row1,Row2,Row3,Row4],
            maplist(same_length(Rows), Rows),
            blocks(Row1, Row2), blocks(Row3, Row4),
            append(Rows, Vs),
            fd_domain(Vs, 1, 4),
            maplist(fd_all_different, Rows),
            transpose(Rows, Cols),
            maplist(fd_all_different, Cols).

    blocks([], []).
    blocks([A1,A2|As], [B1,B2|Bs]) :-
            fd_all_different([A1,A2,B1,B2]),
            blocks(As, Bs).
This assumes the availability of append/2 and transpose/2, which are likely already provided by your Prolog system, and easy to implement if they aren't.
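If needed, both can be defined compactly - a sketch, assuming Rows form a rectangular matrix:

    % append(ListOfLists, List): List is the concatenation of all lists
    append([], []).
    append([L|Ls], As) :-
            append(L, Ws, As),
            append(Ls, Ws).

    % transpose(Rows, Cols): Cols are the columns of Rows
    transpose([], []).
    transpose([[]|_], []).
    transpose(Rows, [Col|Cols]) :-
            maplist(head_tail, Rows, Col, Rests),
            transpose(Rests, Cols).

    head_tail([X|Xs], X, Xs).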


You can actually make a sudoku problem where, by the rules of the game, all open pieces have at least 2 possible values while still having a unique solution. So, to that end, I am unaware of how you could make a solver that doesn't have to guess. (As noted in a sibling post, I wouldn't be shocked to find I was wrong, but I would be interested in the details.)

So, I'm ultimately wary of the claims here. The system has to be doing a search. It may be crafted in such a way that it just walks straight to the solution for some specifications, but that is as much from luck of easy constraints as it is the algorithm.


When you say "all open pieces have at least 2 possible values", how can the solution be unique? Surely for each piece, some values are not admissible, otherwise this would be a contradiction. Sufficiently strong pruning could have eliminated them, either by search or by more reasoning.


First, apologies for going to sleep on you. Huge thanks for sticking with this and opening my eyes to this branch of programming. If you have recommended texts, I'd appreciate it.

For this question, I can really only refer to Knuth. This is exercise 515 of 7.2.2.2. I likely just messed you up on the wording. There are no forced moves for open squares. As soon as you pick one, though, it may force all others.


In the solution of this exercise, 4 example puzzles are given. Here is the first one:

    knuth(a, [[1,_,_,_,_,6,_,8,_],
              [5,_,8,7,2,1,4,_,6],
              [_,6,_,3,8,_,2,_,1],
              [8,4,_,_,_,3,_,_,5],
              [_,_,5,_,6,_,8,_,_],
              [6,_,_,8,_,_,_,4,2],
              [3,_,6,_,4,8,_,2,_],
              [4,_,7,6,3,2,1,_,8],
              [_,8,_,5,_,_,_,_,4]]).
Consider now a Prolog solution for 9x9 (i.e., "normal") Sudoku puzzles, using integer constraints:

    sudoku(Rows) :-
            length(Rows, 9), maplist(same_length(Rows), Rows),
            append(Rows, Vs), Vs ins 1..9,
            maplist(all_distinct, Rows),
            transpose(Rows, Columns), maplist(all_distinct, Columns),
            Rows = [As,Bs,Cs,Ds,Es,Fs,Gs,Hs,Is],
            blocks(As, Bs, Cs), blocks(Ds, Es, Fs), blocks(Gs, Hs, Is).

    blocks([], [], []).
    blocks([N1,N2,N3|Ns1], [N4,N5,N6|Ns2], [N7,N8,N9|Ns3]) :-
            all_distinct([N1,N2,N3,N4,N5,N6,N7,N8,N9]),
            blocks(Ns1, Ns2, Ns3).
This is indeed exactly a case where search is necessary, because the pruning applied by this concrete constraint solver is not strong enough to deduce the unique solution. If we only post the constraints, we get:

    ?- knuth(a, Rows),
       sudoku(Rows),
       maplist(portray_clause, Rows).
    [1, _, _, _, _, 6, _, 8, _].
    [5, _, 8, 7, 2, 1, 4, _, 6].
    [_, 6, _, 3, 8, _, 2, _, 1].
    [8, 4, _, _, _, 3, _, _, 5].
    [_, _, 5, _, 6, _, 8, _, _].
    [6, _, _, 8, _, _, _, 4, 2].
    [3, _, 6, _, 4, 8, _, 2, _].
    [4, _, 7, 6, 3, 2, 1, _, 8].
    [_, 8, _, 5, _, _, _, _, 4].

However, it is exactly as you say: Even assigning a single variable a concrete value automatically enforces the solution by constraint propagation alone:

    ?- knuth(a, Rows),
       sudoku(Rows),
       Rows = [[_,Var|_]|_],
       indomain(Var),
       maplist(portray_clause, Rows).
    [1, 2, 3, 4, 5, 6, 7, 8, 9].
    [5, 9, 8, 7, 2, 1, 4, 3, 6].
    [7, 6, 4, 3, 8, 9, 2, 5, 1].
    [8, 4, 2, 1, 9, 3, 6, 7, 5].
    [9, 7, 5, 2, 6, 4, 8, 1, 3].
    [6, 3, 1, 8, 7, 5, 9, 4, 2].
    [3, 1, 6, 9, 4, 8, 5, 2, 7].
    [4, 5, 7, 6, 3, 2, 1, 9, 8].
    [2, 8, 9, 5, 1, 7, 3, 6, 4].
Note though the general point: A sufficiently strong constraint solver could have deduced the unique solution even without trying any concrete value, exclusively by reasoning about the posted constraints with sufficient sophistication! This concrete solver can't, due to the particular trade-off between efficiency and pruning strength that was chosen by the implementor, and so search must settle the remaining part.

One of the other puzzles is:

    knuth(d, [[1,_,3,_,5,6,_,8,9],
              [6,8,_,3,_,9,1,_,5],
              [_,9,5,1,8,_,6,3,_],
              [3,_,8,9,6,_,_,5,1],
              [_,1,9,5,_,8,3,6,_],
              [5,6,_,_,3,1,9,_,8],
              [_,5,6,_,9,3,8,1,_],
              [8,_,1,6,_,5,_,9,3],
              [9,3,_,8,1,_,5,_,6]]).
In this case, the solution is not unique. We can use the constraint solver to count the number of admissible solutions, again using search:

    ?- knuth(d, Rows),
       sudoku(Rows),
       findall(x, maplist(labeling([ff]), Rows), Ls),
       length(Ls, L).
Yielding: L = 12.

This indeed shows that search is necessary in general also when applying constraint programming. Still, no such search is applied in the posted article. In the example contained in the article, the constraints alone suffice to determine the unique solution. I leave it as an exercise for the reader to determine whether this is the case for all 4x4 Sudoku puzzles.


Thanks for posting. Wasn't sure if folks were interested, but I've found Knuth's Sudoku exercises to be a lot of fun.

I'm curious how you claim a strong enough solver could have solved this without a "guess."

Also, any thoughts on how treating them as constraint propagation compares to casting it as an exact cover problem?


Regarding the strength of different solvers: The solution I posted is just one out of many possible constraint-based formulations of Sudoku, using one particular domain and one particular solver implementation. In this concrete case, I have used constraint logic programming over integers, also known as CLP(FD), which is available in many Prolog systems. This is one of the most important and most prominent applications of constraint logic programming and even of logic programming in general.

As an alternative, we can just as well model Sudoku puzzles as combinatorial tasks over Boolean variables, and hence use CLP(B), constraint logic programming over Booleans.

For example:

    sudoku(Rows) :-
            length(Rows, 9), maplist(same_length(Rows), Rows),
            maplist(row_booleans, Rows, BRows),
            maplist(booleans_distinct, BRows),
            transpose(BRows, BColumns),
            maplist(booleans_distinct, BColumns),
            BRows = [As,Bs,Cs,Ds,Es,Fs,Gs,Hs,Is],
            blocks(As, Bs, Cs), blocks(Ds, Es, Fs), blocks(Gs, Hs, Is).

    blocks([], [], []).
    blocks([N1,N2,N3|Ns1], [N4,N5,N6|Ns2], [N7,N8,N9|Ns3]) :-
            booleans_distinct([N1,N2,N3,N4,N5,N6,N7,N8,N9]),
            blocks(Ns1, Ns2, Ns3).

    booleans_distinct(Bs) :-
            transpose(Bs, Ts),
            maplist(card1, Ts).

    card1(Bs) :- sat(card([1],Bs)).

    row_booleans(Row, Bs) :-
            same_length(Row, Bs),
            maplist(cell_boolean, Row, Bs).

    cell_boolean(Num, Bs) :-
            length(Bs, 9),
            sat(card([1],Bs)),
            element(Num, Bs, 1).
In this formulation, I use 9 Boolean variables to represent which of the 9 possible integers is assigned to a particular cell.

Let us apply this formulation to puzzle (b) that is shown in the solution of this exercise:

    knuth(b, [[1,_,3,_,5,6,_,8,9],
              [5,9,7,3,8,_,6,1,_],
              [6,8,_,1,_,9,3,_,5],
              [9,5,6,_,3,1,8,_,7],
              [_,3,1,5,_,8,9,6,_],
              [2,_,8,9,6,_,1,5,3],
              [8,_,9,6,_,5,_,3,1],
              [_,6,5,_,1,3,2,9,8],
              [3,1,_,8,9,_,5,_,6]]).
Here is the result:

    ?- knuth(b, Rows),
       sudoku(Rows),
       maplist(portray_clause, Rows).
    [1, 2, 3, 4, 5, 6, 7, 8, 9].
    [5, 9, 7, 3, 8, 2, 6, 1, 4].
    [6, 8, 4, 1, 7, 9, 3, 2, 5].
    [9, 5, 6, 2, 3, 1, 8, 4, 7].
    [7, 3, 1, 5, 4, 8, 9, 6, 2].
    [2, 4, 8, 9, 6, 7, 1, 5, 3].
    [8, 7, 9, 6, 2, 5, 4, 3, 1].
    [4, 6, 5, 7, 1, 3, 2, 9, 8].
    [3, 1, 2, 8, 9, 4, 5, 7, 6].
For comparison, here is the result of the CLP(FD)-based formulation that I posted earlier:

    ?- knuth(b, Rows),
       sudoku(Rows),
       maplist(portray_clause, Rows).
    [1, _, 3, _, 5, 6, _, 8, 9].
    [5, 9, 7, 3, 8, _, 6, 1, _].
    [6, 8, _, 1, _, 9, 3, _, 5].
    [9, 5, 6, _, 3, 1, 8, _, 7].
    [_, 3, 1, 5, _, 8, 9, 6, _].
    [2, _, 8, 9, 6, _, 1, 5, 3].
    [8, _, 9, 6, _, 5, _, 3, 1].
    [_, 6, 5, _, 1, 3, 2, 9, 8].
    [3, 1, _, 8, 9, _, 5, _, 6].
This shows that CLP(B), in contrast to CLP(FD), has determined the unique solution without any search. This is an example of a solver that propagates as strongly as possible, and hence achieves global consistency even across different constraints.

It also works for puzzle (a):

    ?- knuth(a, Rows),
       sudoku(Rows),
       maplist(portray_clause, Rows).
    [1, 2, 3, 4, 5, 6, 7, 8, 9].
    [5, 9, 8, 7, 2, 1, 4, 3, 6].
    [7, 6, 4, 3, 8, 9, 2, 5, 1].
    [8, 4, 2, 1, 9, 3, 6, 7, 5].
    [9, 7, 5, 2, 6, 4, 8, 1, 3].
    [6, 3, 1, 8, 7, 5, 9, 4, 2].
    [3, 1, 6, 9, 4, 8, 5, 2, 7].
    [4, 5, 7, 6, 3, 2, 1, 9, 8].
    [2, 8, 9, 5, 1, 7, 3, 6, 4].
At this point, you may wonder: Why have I started with (b) instead of (a)? The reason is easy to explain: For puzzle (a), the strong propagation of the CLP(B)-based formulation took so long that it only completed now, after I have finished typing this, a few minutes after I had posted the query, whereas it only took a few seconds for example (b).

Both examples have taken considerably longer to solve with CLP(B) instead of CLP(FD). That's not to say that one approach is better than the other in general or even in that particular case - instead, it is more an observation about the solver implementations and the chosen trade-offs between propagation strength and efficiency in both solvers.

CLP(B) achieves this strong consistency by internally considering, in a sense, the totality of all possible assignments, and this may require exponential space and hence also time. But for many important practical cases, the representation used by CLP(B) is very efficient, and indeed allows us to solve tough problems efficiently. Note that still, no search is involved here: Even the internal propagation of CLP(B) proceeds completely deterministically, and at no point in the computation do we have to guess anything.

It must also be said that the CLP(FD)-based formulation, in addition to being faster (despite the necessary search), is also shorter and arguably more natural than the Boolean variant. Casting this as an exact cover or hitting set task can definitely be done, either with CLP(B) or CLP(FD) constraints (since Booleans can also be considered as a special case of integers). However, it takes us even further away from the quite direct and readable "natural" CLP(FD) formulation. For cover problems in general, I recommend you obtain a CLP(B) solver that internally uses Zero Suppressed Binary Decision Diagrams (ZDDs). They let you efficiently represent cases where many of the variables (such as row selection indicators) are zero, which is typical for covering tasks.


Consider the Eight Queens problem. There are multiple valid solutions to the problem on a given chessboard. They are symmetric solutions, yet distinct. Any algorithm that finds a single solution and not all of them must be prioritizing arbitrarily among solutions, i.e., searching or fathoming them in some particular order.

> This solution is found via constraint propagation alone. This means that the system has deduced the unique solution purely by reasoning about the available constraints, which it does automatically every time a constraint is posted. This is one example of an algorithm that does not guess.

Have you considered how that constraint propagation algorithm works? What specific algorithm are you referring to? Constraint Programming is an alternative formulation of the same problem that Integer Linear Programming tackles, and the latter is known to be NP-hard.

https://en.wikipedia.org/wiki/Linear_programming#Integer_unk...

Constraint Programming and Integer Programming are both characterized as examples of Tree Search:

http://preserve.lehigh.edu/cgi/viewcontent.cgi?article=1780&...

> Since tree search is a basic solution technique employed in both constraint and integer programming, we begin with a generic overview of tree search as a technique for finding feasible solutions to mathematical models. [...]

> Every tree search algorithm is defined by four elements: a node-processing procedure, pruning rules, branching rules, and search strategy. The processing step is applied at every node of the search tree beginning with the root node, and usually involves some attempt to further reduce the feasible set associated with the node by applying logical rules.

> Overall a tree search algorithm works as follows. A list of nodes that are candidates to be processed is maintained throughout the algorithm. This list initially contains only the root node. At each step in the algorithm, a node is chosen and processed. If the processing results in pruning of the node by one of the pruning rules, then the node is discarded. Otherwise, the node is further divided, resulting in two or more children, which are then added to the list of candidates. This procedure is iterated until no nodes remain on the candidate list.

> [...] the constraint program is made up of two pieces: the constraints that comprise the formulation and a search strategy for solving the problem (emphasis mine). This is in contrast to a mathematical program, which describes a model but does not specify how to solve it.

> Traditionally, a constraint programming formulation consists only of a list of decision variables, domains, a set of constraints to be satisfied, and a search procedure (emphasis mine). This traditional definition refers to a constraint satisfaction problem (CSP), since a solution is any assignment of values to variables such that all constraints are satisfied.

Can you clarify which algorithm you're referring to, and how it manages to solve the problem without employing Tree Search?


One of the most important and characteristic aspects of constraint programming is alluded to in the following part of your quote (emphasis mine):

"involves some attempt to further reduce the feasible set associated with the node by applying logical rules"

These propagation rules are among the major attractions of CP, and distinguish the paradigm from uninformed search strategies that have to consider much larger portions of the search space in general. I think a description of CP should put at least equal emphasis on this pruning mechanism as on the actual search itself, because sufficiently strong propagation may render the search completely unnecessary. In fact, this is exactly what happens in the Sudoku example of the article!

In a sibling thread, sfrank has cited a very important reference as an example of such a propagation algorithm, which serves to illustrate this idea and is also widely used in practice in CP systems and their applications:

A filtering algorithm for constraints of difference in CSPs, J-C. Régin, 1994

To see this algorithm in action, consider the following example: Take three variables X, Y and Z, all of them integers that are either 0 or 1, with the additional constraint that they must be pairwise distinct. In Prolog with CLP, and said algorithm available under the name all_distinct/1, we can state this task as follows:

    ?- X in 0..1, Y in 0..1, Z in 0..1,
       all_distinct([X,Y,Z]).
In response to this query, the system says:

    false.
This means the system has deduced that there are no solutions. Note that this result is obtained without applying any search! For comparison, suppose we naively post pairwise disequalities instead:

    ?- X in 0..1, Y in 0..1, Z in 0..1,
       X #\= Y, Y #\= Z, X #\= Z.
In response, the system now answers:

    X in 0..1,
    X#\=Z,
    X#\=Y,
    Z in 0..1,
    Y#\=Z,
    Y in 0..1.
This means that there could potentially be solutions. The system does not know whether there are any, so it shows us remaining constraints that must hold for any solution. Now, an additional search makes clear that there are none:

    ?- X in 0..1, Y in 0..1, Z in 0..1,
       X #\= Y, Y #\= Z, X #\= Z,
       label([X,Y,Z]).
Here, I have simply added the goal label/1, which is the explicit enumeration and thus the search I have mentioned in an earlier post. In response, we now again of course get:

    false.
As another example of propagation, please consider:

    ?- X in 0..1, Y in 0..1, Z in 0..2,
       all_distinct([X,Y,Z]).
    Z = 2,
    X in 0..1,
    all_distinct([X, Y, 2]),
    Y in 0..1.
Here, the system has deduced that Z is necessarily 2, again without any search. For the other variables, both values are still admissible, and to obtain concrete solutions, we must again label them explicitly. However, we do know that there is a solution in this case, due to the strength of all_distinct/1, which internally uses a complex reasoning about the value graph of the CSP that is somewhat anticlimactically encapsulated in such a superficially trivial predicate, but propagates much more strongly than simply stating pairwise disequalities would.

This shows that you can arrive at a solution (or prove the lack of a solution) either via a search or by using a sufficiently strong propagation mechanism. Doing more of one requires less work of the other. Please note that, as I said before, you indeed need an explicit search in general, because the propagation algorithms alone are, while often quite effective, not effective enough in general to guarantee consistency when different constraints interact, even if they guarantee domain consistency individually. This is a practical trade-off between propagation strength and efficiency of the available constraints. You also need search to enumerate all solutions, if there are more than one. However, note that propagators are also triggered during the search, and so unifying even a single remaining variable with a concrete integer may again lead to a situation where no additional search is necessary. Thus, the initial size of the search space is not a satisfactory measure for the eventual complexity of the search, because it does not account for the additional propagation that is applied during the search itself.

Even in the case of Sudoku, and even if you use the most powerful individual all_distinct/1 constraints, you need to search, in general, to truly generate the unique solution explicitly. But the Prolog code shown in the article doesn't (hence, it will lead to floundering constraints in general), and the concrete puzzle shown in the article is, coincidentally or not, correctly solved just by such propagation algorithms, without any search. This situation is also not particularly rare: In many cases of practical importance, constraint propagation alone already tightens remaining domains so significantly that no or only very little additional search is necessary.

Thus, everything you said is true: Yes, you need search in general, also when using CP, to obtain concrete solutions. However, to repeat, it is quite different from what one would call "brute force" search because the sophisticated and often very effective propagation algorithms prune the search space, often substantially, and can moreover help to guide the search with various heuristics. In practice, you typically buy a Prolog system just to benefit from these propagation algorithms! Various notions of consistency exist, and you can find more such algorithms in the literature references that are for example included in SICStus Prolog and other Prolog systems with constraints.

Second, note that the description you quote does not even cover the particular case that arises in the article and also many other cases in practice: That is, there are no nodes at all, because everything is already settled by constraint propagation before any search even starts, making the whole solution explicitly available without search.

The Eight Queens problem you cite is quite different from Sudoku, because it may have multiple solutions. For this reason, as you correctly observe, search is needed to generate the different solutions in such cases. But this does not invalidate the Sudoku example, where we know that exactly one solution exists, and sufficiently strong pruning could always determine it, whether by internal propagation or other algorithms that simply eliminate all values that do not participate in the unique solution of the puzzle. Nor does it show that the cost for the search outweighs that of constraint propagation in either case.

For these reasons, I recommend to put at least equal focus on constraint propagation as on the search when discussing CP.


Pruning is an optimisation though, not a computational reduction... the remaining computation is still brute force. While it may be effective with low n, the complexity will still grow in roughly the same way as n increases - for this reason I would still call it brute force.

Isn't this why sudoku is still considered NP hard? (unless there has been a recent breakthrough).


'Concurrency-by-default' is similar to a notation I've been using to map out async service calls. It's just this: lines are terminated with "," or ";". A comma doesn't block and all comma-separated lines are executed in any order, while a semicolon blocks. Names are only usable when a semicolon is reached, and a semicolon unblocks flow when all preceding names are bound. Probably code is scoped into { } blocks. So a lambda is like "pyth_distance = {x, y; sqrt(x^2 + y^2)}". A series of async callbacks would be given by an inner block {x = call1(), y = call2(); pyth_distance(x,y)}, allowing you to do any manipulations you can do with normal code.

Might try making a toy language out of it eventually.


You're close to inventing monads! I did this once too, for async reasons, and it's how I wandered into the world of monads and functional programming. I would definitely take a look at Scala's futures and what it means to be a monad. This is probably the best way to internalize them. :)



Hm the terminators are much like shell, e.g.

    # synchronous
    sleep 1 ;
    sleep 2 ;

    # parallel
    sleep 1 &
    sleep 2 &
    wait
    wait
The results have to be files... in shell you think of the file system as your "variables", so everything is somewhat global.


Interesting concept. The article mentioned that other languages determine blocking automatically based on usage of the variables. So in your example, it could be determined that x and y must be calculated first before pyth_distance is called with x and y. Would you incorporate this kind of automatic detection into your language? Why or why not, and to what level? Would it perhaps be safer for the language to determine blocks for you?


This reminds me of writing assembly for VLIW DSP chips, where each 'instruction' is actually several that run concurrently.

https://en.m.wikipedia.org/wiki/Very_long_instruction_word


HDLs generally fit this paradigm. You might want to have a look at VHDL or Verilog.


Urbiscript is another spin on this idea, dedicated to robotics in this case. The development seems to have stalled but the ideas are really interesting.


That's really cool.

Would you build something like this from scratch or base it on another tech stack like the jvm?


That's exactly what || (concurrent statements) and ; (sequential statements) do in Esterel.


Thanks, was totally unaware of that. For the lazy and ignorant like me:

https://en.wikipedia.org/wiki/Esterel

(if you'd like to throw a link in your comment, would be happy to delete this one)


No need :) But I'll happily add this one, which is a nice short intro if you know nothing about it: http://www.embedded.com/design/prototyping-and-development/4...

I came in contact with the language during my studies (one of my profs worked on it), found it interesting, but haven't really had a chance to apply it to anything I've done professionally since. It's a very neat language though, and I'd gladly use it if I ever got to do anything in the embedded field.

I notice that there is now an open source compiler, which is great. The only implementation I was aware of so far was closed source.


I don't know if Prolog should be called esoteric. Prolog is an ISO-standardized language after all, and its syntax has been used for 4+ decades now in most papers on conceptual database system and query language design I've read. Which isn't surprising since Prolog syntax, being based on operator-precedence grammar concepts, is arguably as minimalistic as it gets. There's definitely also a lot of interest lately in Datalog, not just as a decidable logic fragment (which has been used for decades as well), but also as a practical non-SQL database query language and proper Prolog syntax subset.


Considering they also mention SQL and Forth, I think this could probably equally be called "things that confuse someone who only knows C".


I think both Forth and SQL are outliers in their strangeness. Most people only have to deal with SQL on a very shallow level and barely regard it as programming at all.

Forth is known by name and reputation but very few people have ever tried to write anything in it - let alone delve into its strange bootstrapped nature.


FORTH ?KNOW IF HONK ELSE FORTH LEARN THEN


  > Most people only [...] deal with SQL on a very shallow level and
  > barely regard it as programming at all.
Fixed that for you.

And now the sentence states why there are so many problems in the world (of software).

;)


I think the point is, most people being paid to program don't know it, or don't know it very well. (EDIT: Granted, this may be more of a statement about the "mainstream" than it is about prolog).


As far as I can tell, the author didn't call any of these languages esoteric (the word appears once, in reference to the list itself), it's editorial titling, presumably because the actual title of the article sounds horribly click-baity.


> There's definitely also a lot of interest lately in Datalog

Incidentally, Aurora, mentioned in the article, evolved into Eve, which is inspired heavily by Datalog.


To this I would add synchronous programming[1], which is particularly suited for interactive or concurrent programs and formal reasoning, and has had success in industry in safety-critical realtime systems. Examples include Esterel[2] and SCADE, and outside realtime, Céu[3] and Eve[4] (the latter is based on Dedalus[5], which combines SP with logic programming).

As someone who loves formal methods and believes most mainstream software systems today are mostly interactive and/or synchronous, I think this paradigm has a lot of untapped potential, and I'm glad to see it slowly move out of safety-critical systems into the mainstream, in languages like Eve.

[1]: https://en.wikipedia.org/wiki/Synchronous_programming_langua...

[2]: https://en.wikipedia.org/wiki/Esterel

[3]: http://www.ceu-lang.org/

[4]: http://witheve.com/

[5]: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-...


Yeah, there are a lot of interesting benefits to synchronous programming that haven't been explored in a wider context and we're excited to be able to do so. Figuring out how to actually implement Eve's semantics has been quite a quest and unfortunately the implementations of most of those languages don't really fit us. Fortunately, we've put some really interesting things together lately that have produced some very surprising performance numbers for us, so hopefully that's finally resolved and we can move on to how GALS and the like apply in our world. :)


GALS is indeed the obvious next step. That's how I'd do it.


ANI reminds me of HDLs [1] - I'm assuming that's the inspiration with terminology like "latch"?

Hardware is also concurrent by default. Coding some hardware logic will also change the way you approach coding. Anyone who's interested get an FPGA demo board and write some verilog or VHDL - I highly recommend it.

1. https://en.wikipedia.org/wiki/Hardware_description_language


Yes, I can't really understand why he didn't mention HDLs. I had never seen ANI before but both the syntax and terminology reminded me of VHDL.


These layers of abstraction seem to lead software bloggers to re-discovering concepts known by hardware engineers for decades.


Can anyone recommend a good course/book/tutorial for learning Verilog/VHDL? I have a demo board from a course I took in college and would love to try doing some projects with it, but I've had a hard time finding any good learning material.


Writing test benches in Icarus Verilog was quite helpful for me, along with pretty much everything at asic-world[0]. I haven't looked for a VHDL equivalent of Icarus, unfortunately.

Of course, always remember that "can be compiled" != "can be synthesized".

[0] http://www.asic-world.com/verilog/index.html


I was thinking this while reading, too. Concurrency is everywhere in hardware by default, and explicitly stated with fork...join!


Honestly, I don't think that "concatenative" is a good term here. I prefer to call it a special case of combinator-oriented programming. You may see examples of this approach in vector languages like APL, FP and in functional world too (see Henderson's book, SICP and so on). The author of Joy (the language which spawned the whole "concatenative" activity) clearly was inspired by Backus's FP. Here we have stack combinators and we can emulate them even in Python, thanks to closures.

square = code(dup, op("*"))

Add a bit of syntactic sugar and you'll get a "concatenative" or, better, a stack combinator-oriented language.
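For illustration, the same stack-combinator idea can be sketched in Prolog as well, threading an explicit stack (all names here are made up):

    % Each combinator is a relation between an input stack and an output stack.
    dup([X|S], [X,X|S]).
    mul([X,Y|S], [Z|S]) :- Z is X*Y.

    % "Concatenation" is just running the combinators in sequence.
    run([], S, S).
    run([G|Gs], S0, S) :-
            call(G, S0, S1),
            run(Gs, S1, S).

    square(S0, S) :- run([dup,mul], S0, S).
Here ?- square([5], S). yields S = [25].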

As for Prolog, I really appreciate that the author is calling it "declarative" and not a "logic" language. It's more important to learn about backtracking and unification (a powerful variant of pattern matching) than about something like Horn clauses. For anyone who wants to learn Prolog better (and it's worth it, Prolog is one of the most beautiful PLs around!) I recommend this article: http://www.amzi.com/articles/prolog_under_the_hood.htm


I disagree with your second point. "Logic programming" is a much more accurate term for programing with backtracking and unification than "declarative." For example Haskell is a very declarative language, but it doesn't use backtracking at all!


I prefer not to call it "logic programming" from the point of view of learning the language. Too many textbooks start from explaining that "Socrates is a man" or talk about Robinson's resolution. And, as a result, the student doesn't know what to do with the language.

Declarative is a vague term, but for me it means mostly a way to compactly describe your problem in domain-specific terms. There are some languages in which you can produce compact code, like APL or J. But declarative for me means readable too. And there are cases when one can write more compact and readable code in Prolog than in Haskell. In Prolog you can describe just the essence of the problem. Not always, of course.

Another interesting thing about Prolog is that it's a small language. It has only a few internal parts that make it alive (the complexity of many Prolog implementations is a result of fighting for performance). I really like small languages (like Oberon or Forth, for example), because it's possible to learn how they work internally. And the knowledge of this inner working helps to understand the language better. There is nothing "logical" inside Prolog, just a few powerful imperative constructs.

The author of "Prolog Under the Hood An Honest Look" says:

"Prolog, billed as "logic programming", is not really. You may be disappointed if that's what you expected to find. On the other hand, having backtracking, unification, and recursion inside one computer language leads to something very powerful and special."

And I, personally, like Prolog terms very much!


I would like to mention concept-oriented programming [1], which is a work in progress but has the highly ambitious goal of changing the way we think of programming. The initial idea is to make references active and customized elements of the program which can intercept all accesses to the represented objects. A new programming construct, called concept (hence the name of the approach), describes both the behavior of references and the behavior of objects. Objects exist in a hierarchy modeled by the concept hierarchy. The IS-IN relation is used instead of the traditional IS-A. Also, concept-oriented programming distinguishes between incoming and outgoing methods, that is, every method has two versions: one for external access and one for internal access. More papers in [2].

[1] https://arxiv.org/abs/1501.00720

[2] https://www.researchgate.net/profile/Alexandr_Savinov

Disclaimer: I am the author of concept-oriented programming and data model


How does your research interact with object-capability models of computation, as in the E programming language?


(I may be wrong, but) as far as I understand, E is based on objects and messages as primary mechanisms, that is, we do not know where objects exist, how they are managed and how messages are transferred. Concept-oriented programming (COP) is a references-first approach, while objects are defined as being functions of references, that is, an object is a derived (secondary) element of the model, and the goal is to be able to have full control over this mechanism (custom heap, custom garbage collection, custom access control and security model etc.)

What I see similar in object-capability model and COP is that they both try to treat object access as a highly important part of a system behavior. In COP, I used to write that it is more important what happens during access rather than in the object itself. Hence, being able to make these intermediate actions integral part of the program (and develop good PL for that purpose) is very important.


Will Eve ever take off? I like the thoughts behind it.

https://youtu.be/VZQoAKJPbh8 is a very good talk about the background of Eve. When he finally talks about Eve, you might want to switch to a more recent talk about it.


Eve[1] is in the process of becoming a lot more real than it has been up to this point. We've found a great model and discovered a way to build a high performance implementation of it, which means much of the foundational research is finally in place. Over the next couple of weeks, we'll be revamping our website, docs, etc to help people get started building real things with it. :)

There's going to be a lot of really exciting stuff coming over the next few months. We've gathered a set of ideas, evidence, and implementations that have certainly blown us away - we hope others will find it valuable too.

EDIT: I realized I didn't address your initial question. Fwiw, we just recently crossed a really big milestone in terms of usage - more than 40,000 people have now played around with Eve on http://play.witheve.com and we've learned a ton from that experience. Part of the website revamp will be making that workflow simpler and nicer. We have a surprisingly high conversion rate (> 30%), so hopefully we can help smooth out that experience and begin to grow the community more and more.

[1]: http://witheve.com


I can't load the site on my phone. Mind pointing out what makes Eve so special? I'm suspicious of us figuring out anything mind-bogglingly new at this point and assumed Eve was all a gimmick (I hope to be proved wrong :)). Something like Red makes a lot of sense to me as that "ahhh" language, as it is tiny with no install, can be used for high/low level coding, has excellent DSLs such as GUI builders, and can compile to a native binary, to name a few things that kind of shocked me. Eve seems to be more like an online Smalltalk? It's always fascinating to hear new ideas and I wish the best for your project!


We've certainly taken a lot of inspiration from Smalltalk, but I think the semantics we've arrived at make a really nice programming environment, with some surprising properties you may not think are possible.

Eve has a similar philosophy to Red/Rebol - that programming is needlessly complex, and by fixing the model we can make the whole ordeal a lot nicer. We start with a uniform data model - everything in Eve is represented by records (key value pairs attached to a unique ID). This keeps the language small, both implementation-wise and in terms of the number of operators you need to learn.

Programs in Eve are made up of small blocks of code that compose automatically. In each block you query records and then specify transforms on those records. Blocks are short and declarative, and are reactive to changes in data so you don't worry about callbacks, caching, routing, etc.
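
To make the model concrete (this is a hypothetical Python analogy, not Eve syntax): everything is a record, a unique ID attached to key/value pairs, and a block is a query plus a transform that the runtime re-evaluates whenever matching data changes:

    # Hypothetical Python analogy of the model -- not Eve syntax.
    # Records: a unique ID mapped to key/value pairs.
    records = {
        1: {"tag": "person", "name": "Ada"},
        2: {"tag": "person", "name": "Alan"},
    }

    # A "block": query matching records, then transform them.
    def greeting_block(records):
        for r in records.values():
            if r.get("tag") == "person":
                yield {"tag": "greeting", "text": "Hello, " + r["name"] + "!"}

    # A reactive runtime would re-run every block on each data change;
    # here we evaluate it once by hand.
    for out in greeting_block(records):
        print(out)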

Due to this design, we've reduced the error footprint of the language -- there are really only 3 kinds of errors you'll ever encounter, and those mostly relate to data being missing or in a shape you didn't expect. What's more, we'll actually be able to catch most errors with the right tooling. You'll never experience your app crashing or errors related to undefined/nil values.

We've made sure your program is transparent and inspectable. If you want to monitor a value in the system, you can just write a query that displays it, as the program is running. I like to think of this as a "multimeter for code". You can do this for variables, memory, HTTP requests, the code itself ... since everything is represented as records, everything is inspectable.

Because at its core Eve is a database, we also store all the information necessary to track the provenance of values. This is something most languages can't do, because the information just isn't there. So for instance if a value is wrong, you can find out exactly how it was generated.

There's a lot more work to do, but we have big plans going forward. We plan to make the runtime distributed and topology agnostic, so you can potentially write a program that scales to millions of users without having to worry about provisioning servers or about how your code scales.

We're also planning on multiple interfaces to the runtime. Right now we have a text syntax, but there's no reason we couldn't plug in a UI specific editor that integrates with text code. We've explored something like this already.

Anyway, those are future directions, but what we have right now is a set of semantics that allow for the expression of programs without a lot of the boilerplate and hassle you have to go through in traditional languages, and which provide some unique properties you won't find in any other language (at least none that I've used).


Thank you for these intriguing thoughts behind the development of Eve. This project was for me the most valuable find in this whole discussion. There are a number of fundamental design decisions in Eve that opened my eyes to a fresh rethinking of the underlying assumptions in existing languages.

The code examples demonstrate surprising simplicity in achieving features that would be complicated to implement in other languages. I'm convinced that Eve will influence how programmers think (at least it did for me) and promote development of languages/frameworks/libraries that adopt some of these ideas. Great work, will be following with interest.


Thanks for the long description! It is much appreciated and helped me get a feel. I agree wholeheartedly that programming needs to change. Best of luck! I'll be keeping an eye out for Eve.

Quick question: Will it always be online only?


I believe that Aurora (covered in the article) was some kind of pre-cursor to Eve, by the same Chris Granger.


It was indeed, though Eve is a much more grown up idea than Aurora ever was. It's amazing how little we really understood back then and how far we've come. :)



F-Script was a very interesting array programming / objects hybrid. Smalltalk meets APL is not far off.


Especially in something like kdb.


I hear this a lot, but I'm confused because I can hardly find any examples anywhere. It'd be nice if kx systems put up some videos of people querying data, building charts, etc.



A nice demo, but it still doesn't show examples of a customer going through an example dataset and building dashboards. The kx site is good at talking about all their customers without giving a whole lot of real info. I wish they had a 64-bit hobbyist license.


Check nsl.com and https://github.com/JohnEarnest/ok for playing around with examples. Of course, if you want all the facilities, you'll need kx. You can play with the 32-bit version.


I don't know about 64-bit but people in prior HN discussions said you can download a free one off their web site. You should be able to extrapolate some things from that.

EDIT: https://kx.com/download/


True, and thank you. I think it'd be nice if they had a video of a user on a sample system running queries and building displays. Yeah, I can play with the demo, but for that price I'd expect some promotional materials to be made.


A few random comments:

Forth is a great concatenative language, since it's the pioneer in that area (I think), but Factor is definitely worth mentioning too as a "modern" take on the paradigm. It essentially tries to be a concatenative Lisp.

ANI was already dead in 2014 when this article was written (which the author acknowledges: "the language seems dead, but the concepts are pretty interesting"). It has some really interesting ideas, but since it never got implemented, I'm not sure how much use there is in discussing it here amongst real languages. It would be useful in a discussion of possible future languages for sure, but it's currently still just a concept, so I'm not sure what practical thing you can learn from it right now.


ANI/Plaid reminded me of the LabVIEW visual dataflow language, which is quite widely used in the branch of physics I used to work in, for data acquisition and instrument control. While I've often longed for a text-based alternative that plays better with modern version control and my favorite text editors, I have to say that having everything laid out for you in a visual way does make it easier to reason about the execution flow. That is, after a few years of working with it --- initially, this paradigm shift was rather a painful stretch of the nerves.

If every language has its own specific dark patterns and bottlenecks, LabVIEW's is definitely the "brightly-colored spaghetti" structural breakdown of an advanced beginner's code :-)

Incidentally, why do we, as programmers, tend to focus on a language's bottlenecks so much, in such an emotionally charged way? Any psychology-of-programming people out here? You might consider LabVIEW an excellent case study in getting on people's nerves...


For some extra dependently typed fun, check out ATS and Dafny.

ATS is aimed at system programming, and if you think Idris has a steep learning curve, you'll need belaying for ATS. And, the language is horrible. But it's really mind expanding to see it in action.

Dafny is a really basic language with support for Hoare/Dijkstra-style verification. It's a completely different approach from the dependent-type-system model.
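
For a rough flavor of the Hoare-style idea, here's a hypothetical Python sketch with runtime asserts (Dafny actually proves such contracts statically, before the program runs):

    # Hypothetical sketch: a precondition ("requires") and postcondition
    # ("ensures") checked at runtime. Dafny verifies contracts like these
    # statically instead.
    def int_sqrt(n: int) -> int:
        assert n >= 0                            # requires: n >= 0
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        assert r * r <= n < (r + 1) * (r + 1)    # ensures: r == floor(sqrt(n))
        return r

    print(int_sqrt(10))   # 3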


I'd recommend to everybody (but especially those involved in either embedded systems or a lot of concurrent state) to try out HSM / state chart programming (note: this has basically nothing to do with "flat" FSMs). It's as close to a silver bullet as you'll ever get for these kinds of systems.

Stateflow or QP/QM, all other systems suck.
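
For those who haven't seen the idea, here's a minimal sketch (hypothetical Python, not Stateflow/QP code) of the property that makes HSMs different from flat FSMs: events a substate doesn't handle bubble up to its parent state, so common transitions are written once:

    # Minimal hierarchical state machine sketch (for illustration only):
    # unhandled events bubble up to the parent state.
    class State:
        def __init__(self, name, parent=None, handlers=None):
            self.name = name
            self.parent = parent
            self.handlers = handlers or {}   # event -> target state name

    class HSM:
        def __init__(self, states, initial):
            self.states = {s.name: s for s in states}
            self.current = self.states[initial]

        def dispatch(self, event):
            s = self.current
            while s is not None:             # walk up the hierarchy
                if event in s.handlers:
                    self.current = self.states[s.handlers[event]]
                    return
                s = s.parent                 # defer to the parent state

    # "idle" and "running" are substates of "on"; FAULT is handled once,
    # in the parent, instead of in every substate.
    on      = State("on", handlers={"FAULT": "off"})
    idle    = State("idle", parent=on, handlers={"START": "running"})
    running = State("running", parent=on, handlers={"STOP": "idle"})
    off     = State("off", handlers={"RESET": "idle"})

    m = HSM([on, idle, running, off], initial="idle")
    m.dispatch("START")    # idle -> running
    m.dispatch("FAULT")    # inherited from "on" -> off
    print(m.current.name)  # off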


Is it useful for writing drivers, or mostly for application code?

And what are the drawbacks? Why isn't everybody using it?


It's used heavily in application code for automotive ECUs. It has code generators that produce output with a really, really small footprint and zero runtime overhead; thanks to optimization at very high levels, this is on the level of really good LTO.

Drawbacks: it's very un-agile; you really have to think the system through completely. (The magic being that if you do this, it is very likely correct by design.) It's not really feasible to specify a part of the system now and leave other parts open for later refinement. The other drawback is that no good non-commercial options exist.


>> It's not really feasible to specify a part of the system now and leave other parts open for later refinement.

Is there any work or ideas on how to solve that issue?

And so is it also hard to add features later, in subsequent versions?


Stuff like these. It overlaps with model-driven development, where you work at a higher level in a constrained way to knock out many issues. Then it generates safe code from that, which you also check with tests or other tools.

https://en.wikipedia.org/wiki/Stateflow

https://en.wikipedia.org/wiki/Simulink

Recent example from high-assurance security:

https://www.umsec.umn.edu/sites/www.umsec.umn.edu/files/hard...


Thought-provoking. The examples are all programming languages, but the paradigms themselves can apply on a smaller scale, i.e., for application features, as design patterns or inspiration.

The section on "symbolic programming" still has me pondering potential implications. It makes me imagine something like a "visual" WYSIWYG editor, but a "conceptual" one... Looking forward to digging deeper via the provided references.


It's strange to see ANI mentioned as if it were a real, working language. AFAIK it was never able to compile even the samples: https://code.google.com/archive/p/anic/issues/7


Wait. What exactly is esoteric about these paradigms? All the books I've read simply call them regular paradigms.


I guess "uncommon" is increasingly used as one of the meanings of "esoteric"


You're correct that I'd not consider these esoteric, just not procedural or OO. Esoteric would be something like Brainf*, the music programming language, or that art language (I forget the name). Unusual but still commercial and useful paradigms are logic, array, functional, and concatenative, to name a few.


Piet, the Mondrian-inspired language, I'm guessing.


Yes that was it.


Curious. ANI seems to me like an abstract form of graph-parallel programming, where the language itself is the scheduler.

There are some production-ready schedulers for GPP, like Intel's TBB[1] (C++), but learning to be effective with this requires a major shift in thinking about code - essentially thinking in graphs.

[1] - https://www.threadingbuildingblocks.org/tutorial-intel-tbb-f...


My team has been working on a Python library called Loman that represents computations as graphs. We've open-sourced it [1][2]. One of our aims is to make graph-based programming as natural as possible to use, within an already-familiar programming language. I'd be interested to know what you think.

[1] https://github.com/janusassetallocation/loman [2] http://loman.readthedocs.io/en/latest/user/intro.html
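
For a quick flavor, basic usage looks roughly like this (a simplified sketch; exact API details may differ, see the docs for the real quickstart):

    # Simplified sketch of Loman-style usage: nodes are linked by
    # parameter names, and updating an input marks its dependents stale
    # so only they get recomputed.
    from loman import Computation

    comp = Computation()
    comp.add_node('a')                       # an input node
    comp.add_node('b', lambda a: a + 1)      # depends on 'a' by name
    comp.add_node('c', lambda a, b: a * b)   # depends on 'a' and 'b'

    comp.insert('a', 2)     # exogenously set an input value
    comp.compute_all()
    print(comp.value('c'))  # 6

    comp.insert('a', 10)    # new data arrives; dependents become stale
    comp.compute_all()
    print(comp.value('c'))  # 110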


Can you demo your library with a more complex example, e.g. the Dining Philosophers Problem? Here[1] is the solution using TBB, and here[2] is a more recent version, using a multi-output function node to optimize the flow.

[1] - https://software.intel.com/en-us/blogs/2011/01/10/using-the-...

[2] - https://software.intel.com/en-us/blogs/2011/09/13/using-inte...


Thanks for the links. I took a look, and I think that the intention is quite different between the libraries. Our library would not directly apply to the Dining Philosophers Problem. Both libraries use graphs to represent dependencies between tasks, but they do so for different reasons, and to cover different uses. The Intel library does it with the intention of scheduling a given workload. Our library uses a directed acyclic graph to track state as either the data or function for given nodes of the graph are exogenously updated, either interactively during research, or from new incoming data in a real-time system. We cover where we think our library is useful in more depth in the introduction section of our documentation[1].

[1] http://loman.readthedocs.io/en/latest/user/intro.html



Alan Kay talking about agent-oriented computing in 1990:

https://youtu.be/275FQ9koAw8?t=7647

Very interesting views, especially considering how long ago that was and how relevant they still are.


Some time after Java first came out, there was a product called Voyager from a Java products company called ObjectSpace [1]. Voyager was a product for creating and using mobile agents. I had downloaded the trial and tried it out a bit. It was quite cool. IIRC, Graham Glass, who was involved in ObjectSpace, was also the creator of Electric XML, an XML library, and was later CTO of webMethods. Recently he has been working on EDU 2.0 (EDU20.org), an e-learning product company.

[1] They were also the creators of JGL, the Java Generic Library, which was like a Java version of the C++ STL, and was done before Java got generics natively.


That's pretty good, hadn't seen that clip. I think the pervasive networking mentioned has enabled the paradigm but wasn't quite the driver Kay thought it would be. RPCs (of some type) over HTTP have won over having mobile code. On the other hand, I still see a big upside in using agents for their deployment and component (in the divisible whole sense) organization of software.


Telescript and Obliq come to mind.


The Telescript reference I understand, since that was General Magic's goal with Magic Cap, but I'm not sure I see how Obliq is an agent-oriented programming language. It seems to be an object-oriented one.


I could be misremembering, but I thought it was a distributed one. The book that I learned agent-oriented systems from gave it as an example platform that might support mobile agents. Safe Tcl and Java applets were the other two it discussed.


  > Dependent types
  > 
  > Example languages: Idris, Agda, Coq
  > 
  > You’re probably used to type systems in languages like C and Java,
  > where the compiler can check that a variable is an integer, list, or string.
  > But what if your compiler could check that a variable is “a positive integer”,
  > “a list of length 2”, or “a string that is a palindrome”?
This is what I love about SQL. You can even define your own types, like "email", at least in PostgreSQL:

  -- "email" is a user-defined domain; one simple (hypothetical) definition:
  create domain email as text
    check (value ~ '^[^@]+@[^@]+$');

  create table contacts (
    name text not null,
    age int check (age >= 0),
    email email
  );
It already has a few of these special types built in, like for IP and MAC addresses (https://www.postgresql.org/docs/current/static/datatype.html).


The author got the gist of dependent types wrong. It's not simply about checks. Dependently typed languages allow the result type to explicitly depend on the input data.

Thus, a dependently typed database would allow one column's type to depend on the value of another column.

It would be like saying

  Create table dep ( name text, fieldType int not null, fieldValue (if fieldType = 1 then int else varchar))
Notice the 'if' in the type of fieldValue


Out of interest, how is common ORM support for custom Postgres types?


GeoAlchemy for PostGIS/SQLAlchemy is a good example of explicit support. But typically, you can always "deal" with custom types in your ORM, even if that means falling back to binary data.


Shame not to see Factor in the concatenative list; it addresses some of the pain points there with locals.


Postscript would also have been a fine addition and has its own methods of dealing with locals. https://www.adobe.com/products/postscript/pdfs/PLRM.pdf [PDF]


What a sad world we live in when Adobe publishes a paper about PostScript in PDF (which is just PostScript without the programming language).

To help offset the irony and celebrate Turing completeness, here is a PostScript interpreter implemented in PostScript, and an explanation of why.

http://www.donhopkins.com/home/archive/cyber/litecyber/ps.ps...

http://www.donhopkins.com/home/archive/cyber/litecyber/ps.ps...

http://www.donhopkins.com/drupal/node/97


I have also seen _persistent by default_, where every variable is by default stored in a database and automatically initialized when you come back to that piece of code. Useful for web development. (Sorry, I forgot the name of the programming language.)


I'd agree that "persistent by default" is an unusual and interesting approach, that would be suitable for the list of paradigms that may "change the way you code".

It reminds me of a statement I read about managing application state, to treat the state (in this case a Redux store) as an "in-memory database". Add a layer to load/persist automatically - via LocalStorage, WebSocket, etc. - and it would be persistent by default. I suppose you wouldn't want everything persistent though, just a relevant slice of state.

Here's an article about "persistent languages", which includes discussion on related features. http://wiki.c2.com/?PersistentLanguage


This classification of "paradigms" is a bit off.

First, declarative programming is a generic name which includes a broad range of paradigms - from functional to logic programming. Logic programming is something that deserves a special mention and discussion, because there are a number of interesting and unique concepts that deserve a more in-depth explanation.

Second, "dependent types" is better understood as a feature of a language (or better yet, of a type system) than a paradigm by itself.

Some of the other "paradigms" also seem more like characteristics of languages, and not really something that structures the way solutions are expressed/understood.


This. There are three (and arguably only three) common programming paradigms: imperative programming, functional programming, and logic programming with lineages back to Turing machines, lambda calculus, and formal logic / proof search, respectively. Languages can be broken down along other dimensions, but usually the term "paradigm" is reserved for the sort of irreducible foundation of a language, and these seem to be the three useful ones.

Edit: to put a slightly finer point on it, this irreducible foundation mostly has to do with how the language "computes." Imperative languages compute via statements that modify program state. Functional languages compute via proof reduction to normal form. Logic languages compute via proof search.


There's no reason why you couldn't integrate functional programming and imperative programming into a single language – for example, an imperative layer (for I/O) around a functional, lambda-calculus based core (for computations). Now what is the "irreducible foundation of [this] language"?


There should be a category for analogy based languages. Rail or the Billiard Ball Machine, which looks like Feynman diagrams on a pool table, are examples of this.

https://esolangs.org/wiki/Rail https://esolangs.org/wiki/Billiard_ball_machine


I'm kind of uncomfortable calling "declarative" a paradigm; it's a broad (and not binary) feature. Prolog and SQL are, respectively, the leading examplars of the logic and relational paradigms. They are both fairly declarative (but then, in the age of optimizing compilers, even C is somewhat declarative: your code constrains the result but it doesn't dictate the how as much as it seems to.)


I would add pictorial programming languages, such as ProGraph if you want a really unusual way to program. Object oriented, but inherently dataflow. Loops were very weird though.

https://en.wikipedia.org/wiki/Prograph


ParaSail[0] is another implicitly parallel language. It has similar goals to Rust and SPARK Ada.

[0] https://forge.open-do.org/plugins/moinmoin/parasail/


On concatenative languages, I would like to add Piet[1] as a contender. Plus, a Piet program can look like 8-bit art (to me).

[1] http://www.dangermouse.net/esoteric/piet.html


I once read about Pieter Hintjens' model-oriented programming [1], the idea of which I have yet to understand. Hope someone can provide more insight on this subject. HN discussion: [2]

[1] https://github.com/imatix/gsl [2] https://news.ycombinator.com/item?id=11558007


Re: "Dependent Types"

In Python, PyContracts supports runtime type-checking and value constraints/assertions (as @contract decorators, annotations, and docstrings).

https://andreacensi.github.io/contracts/
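
For example (a small sketch; the exact contract strings are from memory, so treat them as an assumption):

    # Small PyContracts sketch (contract-string details from memory):
    from contracts import contract

    @contract(x='int,>=0', returns='int')
    def double_nonneg(x):
        return 2 * x

    double_nonneg(3)    # ok
    double_nonneg(-1)   # raises ContractNotRespected at call time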

Unfortunately, there's as yet no unifying syntax between PyContracts and the newer Python type annotations, which MyPy checks at compile time.

https://github.com/python/typeshed

What does it mean for types to be "a first class member of" a programming language?


Concurrency by default feels a bit like the underlying processor of the machine, what with superscalar architectures and all.


Hardware Description Languages tend to be concurrent by default, with specific syntax for sequential logic.

A parallel pair of AND gates is physically concurrent, let alone anything with a clock.
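
As a hypothetical Python sketch (not an HDL), the "concurrent by default" evaluation model amounts to every gate reading the same snapshot of the signals on each step, so evaluation order never matters:

    # Hypothetical sketch: like an HDL, all "gates" are evaluated against
    # the same snapshot of the signals, so they are order-independent.
    def step(signals, gates):
        snapshot = dict(signals)          # everyone sees the same instant
        return {out: fn(snapshot) for out, fn in gates.items()}

    gates = {
        "y1": lambda s: s["a"] and s["b"],   # two AND gates, physically
        "y2": lambda s: s["c"] and s["d"],   # concurrent, order-free
    }
    signals = {"a": 1, "b": 1, "c": 1, "d": 0, "y1": 0, "y2": 0}
    signals.update(step(signals, gates))
    print(signals["y1"], signals["y2"])   # 1 0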


Great, thought-provoking article, and a good explanation of dependent types. The title is misleading though; it should be called "Particular" or "Specific" Types. Postgres has the same capability (domains); I'm always surprised that other DBs such as SQL Server have never adopted this useful feature.


There seems to be an old discussion over here: https://news.ycombinator.com/item?id=7565153


Aurora looks interesting, but it seems to be .Net/windows only?


That must be a different Aurora. The one mentioned here was never really released, only demo'd at Strange Loop. We subsequently went on to turn that into Eve[1].

[1]: http://witheve.com


Thanks for confirming that. This looks very interesting. How active is development of the mathematical side of things? Also, have graphics capabilities been developed yet, or are they far away? The HN article indicated Aurora (now Eve) would have these capabilities. I'm interested because of the similarities I see between Eve and other literate programming approaches such as the Jupyter project, Mathematica, Sage notebooks, and R notebooks. Eve looks like it will soon be joining that group.


Recent development has been focused on shoring up the semantics of the language, making the runtime more performant, and packaging our syntax and editor to make it more usable for a wider audience (i.e. people outside our office). Efforts relating to a UI that is approachable from a non-programmer perspective have been waylaid until we produce something that is usable and comfortable for at least the programmer crowd.


Is there any chance that you'd ever open-source the original Aurora demo? Seems like it would be a great foundation for further exploration in this space.


I recently submitted a link about "Robust First Computing". It didn't get any response, but I'll repeat the link and description here, since it's certainly esoteric, but has some extremely important properties.

Robust-First Computing: Distributed City Generation https://www.youtube.com/watch?v=XkSXERxucPc

A Movable Feast Machine [1] is a "Robust First" asynchronous distributed fault tolerant cellular-automata-like computer architecture.

The video "Distributed City Generation" [2] demonstrates how you can program a set of Movable Feast Machine rules that build a self-healing city that fills all available space with urban sprawl, and even repairs itself after disasters!

The paper "Local Routing in a new Indefinitely Scalable Architecture" [2] by Trent Small explains how those rules work, how the city streets adaptively learn how to route the cars to nearby buildings they desire to find, and illustrated the advantages of "Robust First" computing:

Abstract: Local routing is a problem which most of us face on a daily basis as we move around the cities we live in. This study proposes several routing methods based on road signs in a procedurally generated city which does not assume knowledge of global city structure and shows its overall efficiency in a variety of dense city environments. We show that techniques such as Intersection-Canalization allow for this method to be feasible for routing information arbitrarily on an architecture with limited resources.

This talk "Robust-first computing: Demon Horde Sort" [4] by Dave Ackley describes an inherently robust sorting machine, like "sorting gas", implemented with the open source Movable Feast Machine simulator, available on github [5].

A Movable Feast Machine is similar in many ways to traditional cellular automata, except for a few important differences that are necessary for infinitely scalable, robust first computing.

First, the rules are applied to cells in random order, instead of to all cells simultaneously (which requires double buffering). Many rule application events may execute in parallel, as long as their "light cones" (the cells visible to the executing rules) do not overlap.

Second, the "light cone" of a rule, aka the "neighborhood" in cellular automata terms, is larger than in typical cellular automata, so the rule can see other cells several steps away.

Third, the rules have write access to all of the cells in the light cone, not just the one in the center like cellular automata rules. So they can swap cells around to enable mobile machines, which is quite difficult with cellular automata rules like John von Neumann's classic 29-state CA. [6] [7]

Fourth, diffusion is built in. A rule may move a particle to another empty cell, or swap it with another particle in a different cell. Most rules automatically move the particle into a randomly chosen adjacent cell by default, so the particles behave like a gas moving with Brownian motion, unless biased by "smart" rules like Maxwell's Demon, such as the "sorting gas" described in the Demon Horde Sort video.
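
Here's roughly what that event discipline looks like (a tiny hypothetical Python sketch of random-order events plus diffusion, not the real MFM rule language):

    # Hypothetical sketch of the MFM-style update discipline: no global
    # clock, no double buffering -- just random single-site events, with
    # diffusion as the default behavior.
    import random

    SIZE = 16
    grid = [[None] * SIZE for _ in range(SIZE)]
    grid[8][8] = "A"   # one particle

    def event():
        # Pick a random site; if it holds a particle, diffuse it by
        # swapping with a randomly chosen neighbor (possibly empty).
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        if grid[y][x] is None:
            return
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
        grid[y][x], grid[ny][nx] = grid[ny][nx], grid[y][x]

    for _ in range(10000):
        event()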

In this video "Robust-first computing: Announcing ULAM at ECAL 2015" [8], David Ackley explains why "Robust First" computing and computing architectures like Movable Feast Machines are so incredibly important for scaling up incredibly parallel hardware.

I think this is incredibly important stuff in the long term, because we've hit the wall with determinism, and the demos are so mind-blowing and visually breathtaking that I want to try programming some of my own Movable Feast Machine systems!

[1] http://movablefeastmachine.org/

[2] https://www.youtube.com/watch?v=XkSXERxucPc

[3] http://www.cs.unm.edu/~ackley/papers/paper_tsmall1_11_24.pdf

[4] https://www.youtube.com/watch?v=helScS3coAE

[5] https://github.com/DaveAckley/MFM

[6] https://en.wikipedia.org/wiki/Von_Neumann_cellular_automaton

[7] https://en.wikipedia.org/wiki/Von_Neumann_universal_construc...

[8] https://www.youtube.com/watch?v=aR7o8GPgSLk


Isn't Lua supposed to be an example of a concatenative language?


Not at all.

The definition I like to use of concatenative language is:

If "X Y" is a legal program, then "X" is a list of tokens and a legal program and "Y" is a list of tokens and a legal program, and semantically, executing "X Y" is equivalent to executing "X" and then executing "Y".

Practical implementations often deviate a little from this ideal.
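
A toy interpreter (a hypothetical Python sketch) makes the property concrete: running the concatenation "X Y" from an empty stack gives the same result as running X and feeding its stack to Y:

    # Toy concatenative interpreter (hypothetical sketch): tokens are
    # either integer literals or stack words.
    def run(tokens, stack=None):
        stack = [] if stack is None else stack
        for t in tokens:
            if t == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif t == "dup":
                stack.append(stack[-1])
            else:
                stack.append(int(t))   # a literal pushes itself
        return stack

    X = ["2", "3"]
    Y = ["+", "dup"]
    # Concatenation of programs == composition of their effects:
    assert run(X + Y) == run(Y, run(X))   # both are [5, 5]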


Not sure where I got that from; probably some other language that I mixed up with Lua in my head.


Not really; Lua's C API is stack-based, but it's concatenative only to the degree that a C program uses a stack + heap.


Do you mean Tcl?


I believe the HN title convention is to remove the number of list items, e.g. this title should be just "Programming paradigms that will change how you think about coding".


You're right, and “... that will change how you think about coding” is clickbait. Luckily the article contains an adjective for that, which we've used in the title. Thanks!


I realise moderation is tough and never pleases everyone, but I have to say I'm disappointed here: in some circles, "esoteric" carries some quite strong negative connotations, which I think are unwarranted here.

Some of these languages are definitely suitable for "serious" usage. And I'm not sure I'd count SQL (or Prolog, for that matter) as esoteric at all!


  > "esoteric" carries some quite strong negative connotations
What? It didn't strike me that way. (And for what it's worth, I'm a native English speaker; I was a communication major, an English minor, and a technical writer; since college I have continued to dabble in writing and linguistics for over 15 years; and as you can see I even know how to use a semicolon. :)

Had they said "eccentric," maybe I would have agreed. Esoteric, to me, has more the connotation of a secret society of sages.

Anyway, the article itself uses "esoteric."

By the way, I'm so glad that they declickbaited the headline. Thank you, editor!


That's a good point. We'll happily update the title again if someone can suggest a better one still using the author's language.


The suggestion from 'pitaj' that started this comment thread was perfect.


> "... that will change how you think about coding” is clickbait

Nonsense. The article's premise is explicitly that the listed paradigms will change how you think about coding. The author may or may not be correct about that, but the original title was not random hyperbole to entice a click - it was an accurate description of the article's premise and content.



