Hacker News
The Idea of Lisp (dev.to)
488 points by rbanffy on Dec 16, 2016 | 340 comments



This article has many misstatements in its first half.

> John McCarthy wrote 6 easy things in machine code, then combined them to make a programming language.

John McCarthy didn't implement Lisp in machine code. Steve Russell did. Implementing Lisp properly in machine code is not easy; you have to write a garbage collector. To do that in the early 60s, you had to first invent garbage collection. Lisp was and is brilliant, but not as easily bootstrappable as this makes it out to be.

> It's not obvious that these six things are computationally complete (AKA Turing Complete).

`lambda` and function application alone are Turing-complete, as McCarthy would have known. The credit here belongs with Turing and Church, not McCarthy. `atom`, `cons`, `car` and all the rest are just icing on the cake of the lambda calculus when it comes to computability.
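
A tiny sketch of that point, using Church numerals (in Python; the names are purely illustrative): numbers and arithmetic fall out of `lambda` and application alone.

```python
# Church numerals: a number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
# to_int(plus(two)(three)) == 5
```

Pairs, booleans, and lists can be encoded the same way, which is exactly why lambda and application alone suffice for Turing-completeness.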

> All other meaning can be defined in terms of them.

Yes, and you can build everything on top of the SK combinator calculus if you like, but that doesn't make it a good idea. Lisp is surprisingly practical given how few core constructs it has, but real Lisp implementations have always added more primitives (eg. numbers and addition) for reasons of practicality.
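
For the curious, the whole SK basis fits in two lines (sketched in Python, purely for illustration):

```python
# The SK combinator basis: in principle, every computable function
# can be built from these two combinators plus application.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# The identity combinator derived as I = S K K:
# S K K z  =  K z (K z)  =  z
I = S(K)(K)
```

It works, but programming directly in it is exactly as impractical as the lack of numbers and addition suggests.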

> The language was defined in terms of itself as an interpreter. This is a proof by construction that the language is computationally complete.

No, it isn't. To prove Turing-completeness you need to show that you are as powerful as Turing machines. To do this it suffices to show that you can interpret a language already known to be Turing-complete. Showing you can interpret yourself does not suffice. It's easy to define a language which can do nothing useful except interpret itself, for example. (See also wyager's comment.)

> Well, Lisp is defined as an interpreter in terms of itself from the get-go, just like a Universal Turing Machine.

No. Defining a language only in itself is nonsense, for exactly the reason given above: it means nothing yet! It's like writing in a dictionary:

   qyzzyghlm, v. intr. To qyzzyghlm.
It explains nothing unless you already understand it!

> Lisp is a universal language because it can interpret its own code. While you can certainly write a JavaScript interpreter in JavaScript, none of the work is done for you.

Almost none of the work is done for you in Lisp either. The core of Lisp is just a relatively easy language to implement, while JavaScript is a difficult one. Lisp is easy to implement because it has simple syntax (s-expressions) and few core constructs. The only thing that is special about implementing Lisp in Lisp is that Lisp uses s-expressions as its core data structure, so you don't have to invent an AST representation. The article, to its credit, explores this idea later.
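
To make the "easy to implement" claim concrete, here is a rough sketch of McCarthy's core in Python, with lists standing in for s-expressions (all names and details here are mine, not the article's):

```python
def evaluate(x, env):
    """Evaluate an s-expression (Python lists/strings) in an environment."""
    if isinstance(x, str):                       # a symbol: look it up
        return env[x]
    op = x[0]
    if op == "quote":
        return x[1]
    if op == "atom":
        v = evaluate(x[1], env)
        return "t" if not isinstance(v, list) or v == [] else []
    if op == "eq":
        return "t" if evaluate(x[1], env) == evaluate(x[2], env) else []
    if op == "car":
        return evaluate(x[1], env)[0]
    if op == "cdr":
        return evaluate(x[1], env)[1:]
    if op == "cons":
        return [evaluate(x[1], env)] + evaluate(x[2], env)
    if op == "cond":
        for test, branch in x[1:]:
            if evaluate(test, env) == "t":
                return evaluate(branch, env)
        return []
    if op == "lambda":                           # build a closure
        params, body = x[1], x[2]
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    f = evaluate(op, env)                        # application
    return f(*[evaluate(arg, env) for arg in x[1:]])
```

For example, `evaluate(["car", ["quote", ["a", "b", "c"]]], {})` returns `"a"`. The whole evaluator is a screenful precisely because the syntax is already the AST.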


«`lambda` and function application alone are Turing-complete, as McCarthy would have known. The credit here belongs with Turing and Church, not McCarthy»

I have sometimes pedantically quibbled with people that it is properly named the Church-Turing Thesis, and that crediting both esteemed mathematicians helps push people toward realizing how groundbreaking that effort was: two mathematicians, separated by an ocean and using vastly different approaches to the mathematics of computation, discovered how intricately linked their models were. It also highlights how integral the lambda calculus truly was to the definition of "Turing-complete".

Without Church's work on the Lambda calculus and without the Lambda calculus being so very different from the Turing "imperative" model of machine, we'd probably have a much narrower view of what "Turing-complete" even means, if we even had the concept at all.


> `lambda` and function application alone are Turing-complete, as McCarthy would have known. The credit here belongs with Turing and Church, not McCarthy. `atom`, `cons`, `car` and all the rest are just icing on the cake of the lambda calculus when it comes to computability.

This point reminds me of something interesting that I noticed recently despite having been familiar with basic LISP ideas for a long time.

If you carefully read McCarthy's original presentation of LISP, you can see that he seemed to be influenced at least as much by Kurt Gödel's work as by Alonzo Church's work.

In Church's lambda calculus, functions are abstract values that can only be used by means of function application. LISP evolved toward this style over time, but in the early history of LISP, functions were not abstract values but were represented by encoding into S-expressions. This encoding process resembles Gödel numbering.

Also, recursion arises in a different form in lambda calculus than it does in the approach to computability that is based on general recursive functions. Again, I think that LISP is closer to Gödel here than to Church.


> Also, recursion arises in a different form in lambda calculus than it does in the approach to computability that is based on general recursive functions. Again, I think that LISP is closer to Gödel here than to Church.

I think I understand most of the rest of your post, but I got lost here. Could you describe what difference you see here, and how it applies to LISP?


When I mentioned "general recursive functions", I was thinking about systems that define functions using a system of functional equations rather than using a lambda term.

Example:

    odd 0 = False
    odd n = even (n - 1)
    even 0 = True
    even n = odd (n - 1)
I think "rewriting system" is something similar. (I'm not an expert in this stuff so I'm providing hints for further reading more than anything else).

In these systems, recursion is "built in" to the formal language itself.

On the other hand, lambda calculus does not have recursion "built in". All variables are bound by function application so it is not possible to have recursive bindings like you see in these recursive function systems. You can define recursive functions using mechanisms like the Y combinator but these work by reconstructing something over and over as the computation evolves. It's different.
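
In a strict language you need the Z variant of the Y combinator, but the "reconstructing something over and over" shape is visible in a few lines (Python here; names are mine):

```python
# The Z combinator: recursion without any recursive binding
# in the language itself.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A factorial with no self-reference anywhere in its definition:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
# fact(5) == 120
```

Every recursive call goes back through `x(x)`, rebuilding the function as the computation evolves; nothing in the language ever binds `fact` to itself.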

In McCarthy's paper, there is a system of toplevel recursive functions and there is a LABELS form that behaves in a similar way. Lambda calculus has neither of these.

One resource that describes recursive function systems of the kind I'm thinking of is the book "Lambda-Calculus and Combinators, an Introduction" by Hindley and Seldin (especially chapter 4).


The article is called "The Idea of Lisp"; it conveys the broad concepts. It's meant as something that can be digested as a newsletter or an email. He even mentions his email list in the article.

I don't think it was meant to be an encyclopedic reference on the language.

I think it's odd that you spent that much effort tearing it apart. I, for one, enjoyed reading it while waiting for a coffee.

Who actually wrote the machine code is arcana that's lost to the dust bin of history anyway. What isn't lost are the high level ideas presented here.


I don't like that you're being downvoted. You're making a fair point. I do like the second half of the article, for the record.

I think that attribution matters; albeit probably more the attribution to Church and Turing than to Steve Russell over McCarthy. It matters because if someone finds the ideas the article brings up interesting and wants to dig into them, they should know where to turn. The history of ideas informs future ideas to come.

I also think understanding the details of high-level ideas matters. The misconceptions about self-interpretation, for example, are quite deep: no language can ultimately be defined in terms of itself, but always in terms of something lower-level. Chase this thread far enough and you start studying transistors; or, in another direction, Goedel's incompleteness theorems.


Garbage collection is not necessary for lisp. Garbage collection only provides the illusion of infinite memory. Just like malloc/free.


And the implementation of garbage collection was, on at least two occasions, postponed. Once in the first implementation [1] and a second time in the early MIT Lisp Machines [2] (you just ran the machine until you ran out of memory which could take days or weeks, after which you saved the world to disk and rebooted).

[1] http://www-formal.stanford.edu/jmc/history/lisp/node3.html

[2] https://www.csee.umbc.edu/courses/331/resources/papers/Evolu...


"saving the world to disk and rebooting" sounds like a primitive form of garbage collection to me :)


It is. From that second link: "Such copying back and forth to disk was equivalent to a slow, manually triggered copying garbage collector."


McCarthy's writing style is highly entertaining. Although I still have no idea what "Pornographic Programming" is...


I suspect it has something to do with bondage and discipline.

"... decisions ... later proved unfortunate. These included ... the use of the number zero to denote the empty list NIL and the truth value false. Besides encouraging pornographic programming, giving a special interpretation to the address 0 has caused difficulties in all subsequent implementations." -John McCarthy

Maybe he considered NIL to be his own billion dollar mistake.

Tony Hoare / Historically Bad Ideas: "Null References: The Billion Dollar Mistake" [1]

"Abstract: I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965." -Tony Hoare

[1] http://lambda-the-ultimate.org/node/3186


I suspect it was more that ()-is-nil-is-false-is-0 is entirely visible to the programmer ("see through") rather than an implementation detail hidden by the language.

I still need the occasional reminder myself that nil is () is nil when writing Common Lisp code, lest I do un-idiomatic things like set a slot's initial value to '().


I'm only guessing, but it seems like he means "impure functions", e.g., side effects.


It's difficult to use malloc/free memory management in a language that supports closures, since ownership becomes very hard to keep track of statically.


I assure you that "malloc" and "free" don't "provide the illusion of infinite memory". Quite the opposite, in fact.


In practice, many programs ignore the fact that malloc can return NULL. As do some OSes and their implementations of malloc, if they support/enable/require overcommit. These are perhaps operating under "the illusion of infinite memory" (in the GC sense): free, in this context, is simply a way of marking data as invalid and no longer to be referenced - a method of poisoning data for debug purposes.

But of course, I've had malloc return NULL - very finite.


They do if every malloc is matched 1:1 with a free, which doesn't usually happen, especially in big codebases, thus leading to CVEs.


They surely do not. Malloc is specified in such a way that the request for memory can fail (which leads to returning NULL).


The specification allows for this, yes. However, on some platforms (including linux glibc by default, I believe), malloc() never fails, but allocates virtual memory optimistically; the first you hear of an out of memory condition is when the system slows down due to paging, and the next thing you notice is when the OOM killer nixes a process.

Of course, other platforms, especially embedded ones, behave differently.


Actually, there is one reason for malloc to return NULL even with virtual memory: your process can run out of address space.


Or run out of room in the paging file. Your addressable memory cannot be larger than physical memory without a backing store.


Depending on vm.overcommit_memory, Linux might give out address space well beyond the size of the pagefile, hoping many of those pages are never written to (e.g., most threads never get anywhere near the bottom of their default stacks).


Assuming that the OS APIs used by malloc() do tell the application about OOM.


Challenge: Write a Turing complete machine using a finite number of registers and a state machine. One or all of the registers can contain a rational number of unlimited precision. (This has already been done, so if you are aware of the existing machines, you have to create a new one.)


This...is not possible?


No, it's possible. The key insight is that a "rational number of unlimited precision" can be used to store an arbitrarily large amount of data; it can emulate a Turing machine's unlimited working tape.
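
Conway's FRACTRAN is one well-known machine of this kind: all state lives in a single number of unlimited precision, and the "program" is a finite list of fractions. A minimal interpreter sketch (in Python; the addition program below is Conway's classic example):

```python
from fractions import Fraction

def fractran(program, n, max_steps=10_000):
    """Repeatedly multiply n by the first fraction in the program that
    keeps n an integer; halt when no fraction applies."""
    n = Fraction(n)
    for _ in range(max_steps):
        for f in program:
            if (n * f).denominator == 1:
                n = n * f
                break
        else:
            break                      # no fraction applies: halt
    return int(n)

# Conway's addition program: [3/2] rewrites 2^a * 3^b into 3^(a+b).
result = fractran([Fraction(3, 2)], 2**3 * 3**4)
# result == 3**7
```

The register machine's counters are hiding in the exponents of the prime factorization, which is how one unbounded number emulates a Turing machine's tape.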


You win a stcredzero "no-prize!"


Even simple problems, solved in a straightforward style, can burn through gigabytes of allocations. Without GC, you are really crippled. Not to mention event-processing loops (servers and such) which run indefinitely.


The story as I have heard it before is that John McCarthy wrote a Lisp interpreter in Lisp and Steve Russell translated it directly into machine code to get a working interpreter. Wikipedia indicates that Steve Russell implemented Lisp in machine code twice but gives no details.

Do you have a source giving additional details of what happened?


Here's an account by McCarthy of the early history: http://www-formal.stanford.edu/jmc/history/lisp/lisp.html


That is the account that I had read before. It only talks about Steve Russell's first implementation, and makes it sound like a mechanical translation of a theoretical implementation into reality.

However Wikipedia says that there is a second, and the person that I was responding to indicated that Steve Russell had to create a garbage collector to make it work. Those are things that I had not previously heard, and I'm curious about.


I have no reason to believe that Russell wrote a garbage collector for the first Lisp implementations. I used the adverb "properly" specifically because it's possible to implement Lisp without GC, but this isn't really a viable approach long-term. So to make Lisp practical, significant work on GC had to be (and was) done.


My understanding is that the first version of Lisp with true GC was Scheme. Which was, not coincidentally, the first version of Lisp with lexical scope and closures.

The first implementation of Lisp used reference counting.


BIBOP is the dynamically expandable version of MACLISP, the SAIL standard MACLISP. Essentially, the main advantage of BIBOP is that whenever one of the expandable spaces runs out of space, BIBOP requests a larger core allocation from the monitor and the delinquent space grows in the allocated memory.

December 1973; updated March 1974

The (in)famous "Bibop" (pronounced "bee-bop") LISP scheme has been available for some time now and seems to be more or less reliable. Bibop means "BIg Bag Of Pages", a reference to the method of memory management used to take advantage of the memory paging features of ITS. The average LISP user should not be greatly affected in converting to this new LISP (which very eventually will become the standard LISP, say in a few months).

http://www.saildart.org/BIBOP.RPG[UP,DOC]


That's strange.

McCarthy describes a mark&sweep GC.

http://www-formal.stanford.edu/jmc/recursive/node4.html


I am confused by your comment; reference counting is a form of GC. Could you elaborate?


Often reference counting is considered distinct from GC; sometimes the term "tracing GC" is used to disambiguate. Refcounting & tracing GC have the same purpose, but different implementation techniques and performance implications; and perhaps more importantly, reference counting can't collect objects which cyclically reference one another.
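
CPython makes the cycle limitation easy to demonstrate (a small sketch; `Node` is just an illustrative class):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.disable()                  # leave only reference counting running
a, b = Node(), Node()
a.other, b.other = b, a       # a cycle: each keeps the other's count at 1
del a, b                      # now unreachable, yet nothing is freed
collected = gc.collect()      # the tracing collector finds the cycle
gc.enable()
# collected >= 2 (the two Nodes, plus their attribute dicts)
```

With only reference counting active, the two nodes keep each other alive forever; the tracing collector is what finally reclaims them.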


That makes sense. I guess this is also why CPython uses reference counting as well as mark and sweep?


I believe so, yes.


> John McCarthy didn't implement Lisp in machine code.

McCarthy also didn't expect S-expressions to be a concrete form. He expected everybody would write in M-expressions.

The fact that S-expressions worked as the language itself is again due to Steve Russell's insight.


The problem was that M-expressions are just a really silly way of spelling S-expressions, almost node for node, atom for atom. Whatever disadvantages we agree S-expressions have, M-expressions have all of them, and then some: like extra noise due to punctuation.

What are M-expressions? Basically this:

   (a b c)                          (quote (a b c))
   car[x]                           (car x)
   car[append[(a b c); (d e f)]]    (car (append (quote (a b c)) (quote (d e f))))
Parentheses implicitly quote. Square brackets are non-quoting, but the first element goes outside. Inside square brackets, pointlessly superfluous semicolons appear.

S-expressions developed the ' notation for quoting, almost eliminating the slight advantage of M-expressions there:

   (a b c)                          '(a b c)
   car[x]                           (car x)
   car[append[(a b c); (d e f)]]    (car (append '(a b c) '(d e f)))
The apostrophe notation is smarter; I don't see a way in M-exps to quote an individual atom. Though M-exps give us this:

   (x)                              (quote (x))
if we want (quote x) it looks as if we have to write:

   quote[x]                         'x
whose lameness is self-evident.

I think McCarthy was exploiting M-expressions for the way that most trivial instances of them look different from S-expressions, which was a useful distinction to have in the papers about Lisp. With an M-exp you could whack the reader over the head and say "hey reader, this is character syntax: what you enter into the machine" and then with S-exps you say "this is a textual picture of internal structure".

There are other ways for that though, like different fonts, or upper/lower case and whatnot, or some REPL convention where we show the prompt in front of what is typed in, plus the "result arrow" and such.


I think that M-expressions are quite interesting but I'll just make a few concrete points here.

> if we want (quote x) it looks as if we have to write: ... quote[x]

I think that you could just write X in that case. Uppercase text was for symbolic atoms.

Also, you didn't mention conditionals:

M-expressions:

    ff[x] = [atom[x] → x; T → ff[car[x]]]
S-expressions:

    (defun ff (x) (cond ((atom x) x) (T (ff (car x)))))
The semicolon and the infix arrow have the benefit of reducing the need for parentheses in that case. And you can avoid writing "defun" and "cond".


Another interesting aspect of M-expressions is how much like Mathematica/Wolfram they look.


IIRC in M-expressions, uppercase meant an atom. You'd write X instead of quote[x]. QUOTE was introduced only for the eval function working on the S-expression form.


> Implementing Lisp properly in machine code is not easy; you have to write a garbage collector.

That's not actually true: you could, instead, just fill up memory and crash when you're out. It's not ideal, but it does work.


That might not meet some expectations behind the word "properly".


Such expectations would have little to do with Lisp. Even one with a garbage collector can run out of memory. How quickly that happens and how it handles it is just a matter of how good of a garbage collector you have.


Nobody in this thread has claimed that a program's reachability graph will always magically fit into available RAM, if only a garbage collector is present.

> How quickly that happens and how it handles it is just a matter of how good of a garbage collector you have.

It depends on the object size, how well objects are packed, and how much memory you have. Waste cannot be better than zero, so we have an upper bound on hitting OOM.

The lower bound (under non-compacting allocation) depends on the size ratio between the smallest and largest object. This is explored in a paper by J.M. Robson (JACM 18, 416-423, 1971), where Robson shows (IIRC, using only power-of-two block sizes, but arguing that the result is general) that an allocator client can be so contrived as to bring about the worst case whereby the heap consists only of the smallest objects, spaced apart just close enough that a request for the largest object can then be made, which fails. The memory utilization at that OOM moment is then the ratio between the size of the smallest and largest object. Robson's important argument is that an allocator has no way to defend against this problem. Regardless of its strategy for placing requests, the client can be contrived to tickle the worst case.

Fragmentation is not necessarily GC's fault; some objects just can't be moved. Objects which are opaque memory managed by a foreign library API pose an intractable problem in this area. There are good rationales for a non-compacting GC; non-compacting doesn't make it "bad".

Without GC, there is no lower bound on the amount of reachable data which exists under OOM. A tight loop which repeatedly calls cons will hit OOM.


Moreover, a Universal Turing Machine is not described in itself. The classic one (read-write head with infinite tape) is certainly not described using a read-write head with infinite tape; it's described in plain language plus math notation.

If you write a JS interpreter in JS, a heck of a lot of work is in fact done for you.

That's why every teenager and his dog has a transpiler from something (resembling JS or not) to JavaScript.


How many primitives does it take to build a LISP machine? Is it ten, seven or five? (And how many did McCarthy use?)

http://stackoverflow.com/questions/3482389/how-many-primitiv...


None of the above. Consider that symbols are objects which have a name property, which is a character string. None of the cited primitives can construct a character string and intern a symbol with that name.


The conditional expression, or more specifically everything being an expression, is my favorite thing about Lisp. I did not know that McCarthy pushed to add it to Algol; apparently it survives today as the ternary operator in most languages.

It is annoying that so many languages (C, Java, C#, etc) have both a conditional statement (if-else) and conditional expression (ternary ?:). Really the if-else should be an expression (I think the ternary operator is hideous).


Expressions are limited, because they can only return one result. In stack based languages like Forth and PostScript, any function can take and return any number of parameters. In fact they can decide at runtime how many to take and return.

PostScript is a lot like Lisp in that it's purely and simply homoiconic: PostScript code is just normal PostScript data. The "ifelse" operator takes a boolean and two expressions (executable PostScript polymorphic arrays, or any other PostScript object, executable or not -- non-executable objects are just pushed onto the stack), and executes one or the other depending on the value of the boolean parameter.
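
That "code is just data" flavor is easy to mimic: here's a toy PostScript-style interpreter (in Python, entirely invented for illustration) where procedures are ordinary lists and `ifelse` simply executes one of them:

```python
# A toy stack interpreter in the PostScript spirit: programs are plain
# lists, so quoted procedures and data share one representation.
def run(program, stack=None):
    stack = [] if stack is None else stack
    for tok in program:
        if tok == "ifelse":
            else_p, then_p, cond = stack.pop(), stack.pop(), stack.pop()
            run(then_p if cond else else_p, stack)
        elif tok == "dup":
            stack.append(stack[-1])
        elif tok == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif tok == "gt":
            b, a = stack.pop(), stack.pop()
            stack.append(a > b)
        else:
            stack.append(tok)   # literals and quoted procedures push themselves
    return stack
```

`run([5, 0, "gt", [5, "dup", "mul"], [0], "ifelse"])` leaves `[25]` on the stack: the two quoted procedures sat there as plain data until `ifelse` chose one to execute.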

I think Lisp's multiple-value-bind is an inelegant hack, compared to the simplicity of PostScript.


PS seems like a nice language. Such a shame it's stuck on the printer: it seems like it would be a good alternative to FORTH in less memory-constrained environments.


NeWS is a Network extensible Window System developed by James Gosling at Sun in the 80's, which used PostScript not just to draw on the screen (like NeXT), but also (unlike NeXT but like AJAX) to implement entire window managers, user interface toolkits, applications, and intelligent responsive front-ends to networked applications, all in the NeWS window server (like a web browser).

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

+ used PostScript code instead of JavaScript for programming.

+ used PostScript graphics instead of DHTML and CSS for rendering.

+ used PostScript data instead of XML and JSON for data representation.

http://www.donhopkins.com/drupal/node/97

See the PSIBER and PizzaTool code I posted in the message above for an example of what was possible with NeWS!


Yes, NeWS was cool. But NeWS no longer exists, sadly.


Almost all FP languages have a solution for this... it is called a tuple. With Lisp it is a list or cons cell.

ML languages have true tuples, which I have to say are superior to output function parameters, and they allow pattern matching.

You're the first person I have ever seen give a compliment to PostScript the language. I'll have to take another look at PostScript (and other stack-based languages).


I love PostScript! I haven't written (or stroked) a line of it in years, but it's still fun to think in. I used to do a lot of Forth programming before that, but PostScript is a lot higher level and Lispier (and NeWS's object oriented programming system was very Smalltalky).

Here's a metacircular PostScript interpreter:

http://donhopkins.com/home/archive/NeWS/ps.ps.txt

Also check out Glenn Reid's PostScript Distillery, a partial evaluator for PostScript programs. It reads in an input PostScript program that draws text and graphics to print a document, partially evaluates it against the PostScript stencil/paint imaging model, and then writes out another canonical output PostScript program that draws the exact same document, but optimized: all in the same default user coordinate system, with redundant graphics state changes removed.

Distillery was the initial idea and working proof of concept that led to PDF, Adobe Acrobat and its Distiller which converts PostScript to PDF.

Of course if the input PostScript program that draws the document is procedural and has loops or recursion, the optimized output program has all the loops unwound and may actually be much larger! But the whole point of PDF and the Distiller is to strip out the programming language parts of PostScript and just represent the effective drawing commands.

http://donhopkins.com/home/archive/postscript/newerstill.ps....

And here's a paper about a visual PostScript programming and debugging environment for NeWS, which discusses the metacircular evaluator and PostScript distillery:

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines - October 1989: http://www.donhopkins.com/drupal/node/97

The Metacircular Postscript Interpreter

A program that interprets the language it is written in is said to be "metacircular". [Abelson, Structure and Interpretation of Computer Programs] Since PostScript, like Scheme, is a simple yet powerful language, with procedures as first class data structures, implementing "ps.ps", a metacircular PostScript interpreter, turned out to be straightforward (or drawrofthgiarts, with respect to the syntax). A metacircular PostScript interpreter should be compatible with the "exec" operator (modulo bugs and limitations). Some of the key ideas came from Crispin Goswell's PostScript implementation. [Goswell, An Implementation of PostScript]

The metacircular interpreter can be used as a debugging tool, to trace and single step through the execution of PostScript instructions. It calls a trace function before each instruction, that you can redefine to trace the execution in any way. One useful trace function animates the graphical stack on the PSIBER Space Deck step by step.

The meta-execution stack is a PostScript array, into which the metacircular interpreter pushes continuations for control structures. (forall, loop, stopped, etc...) A continuation is represented as a dictionary in which the state needed by the control structure is stored (plus some other information to help with debugging).

It is written in such a way that it can interpret itself: It has its own meta-execution stack to store the program's state, and it stashes its own state on the execution stack of the interpreter that's interpreting it, so the meta-interpreter's state does not get in the way of the program it's interpreting.

It is possible to experiment with modifications and extensions to PostScript, by revectoring functions and operators, and modifying the metacircular interpreter.

The metacircular interpreter can serve as a basis for PostScript algorithm animation. One very simple animation is a two dimensional plot of the operand stack depth (x), against the execution stack depth (y), over time.

Printing Distilled PostScript

The data structure displays (including those of the Pseudo Scientific Visualizer, described below) can be printed on a PostScript printer by capturing the drawing commands in a file.

Glenn Reid's "Distillery" program is a PostScript optimizer, that executes a page description, and (in most cases) produces another smaller, more efficient PostScript program, that prints the same image. [Reid, The Distillery] The trick is to redefine the path consuming operators, like fill, stroke, and show, so they write out the path in device space, and incremental changes to the graphics state. Even though the program that computes the display may be quite complicated, the distilled graphical output is very simple and low level, with all the loops unrolled.

The NeWS distillery uses the same basic technique as Glenn Reid's Distillery, but it is much simpler, does not optimize as much, and is not as complete.

PSIBER source: http://donhopkins.com/home/archive/psiber/cyber/litecyber.ps...

The source includes a twisty little version of QuickSort implemented in PostScript by Don Woods, who also wrote Adventure!

PSIBER is a terribly ugly example of 7775 lines of PostScript code that draws and edits and debugs other PostScript code, but here's some better code that is well commented and meant to serve as a programming example, which configures, draws and orders pizzas:

PizzaTool source: http://donhopkins.com/home/archive/NeWS/pizzatool.txt

PizzaTool man page: http://donhopkins.com/home/archive/NeWS/pizzatool.6


If you like everything being an expression, check out Tcl. A lot of ideas from Lisp show up in Tcl, especially the idea of everything as an expression. Tcl embodies this idea while also having the look of an Algol-like language. Funny it can pull this off while having basically no syntax.


Rust also has (almost) everything being an expression - things like this aren't uncommon:

    let x = if something {
        foo()
    } else {
        bar()
    };
Things which don't have a logical value evaluate to `()` (the empty tuple), I believe.


> Things which don't have a logical value evaluate to `()` (the empty tuple), I believe.

If Rust follows Scala then `()` is not the empty tuple, but rather Unit (void in C*).


The empty tuple/unit is not the same as void. It has exactly one possible value, void has zero possible values. A way to write it in Rust is `enum Void {}` (an enumeration with no options).


The point is that both void and Unit can only produce a side effect. In everything-is-an-expression based languages Unit is exactly equivalent to void in C*.



It's the same thing. It's a type with only a single value.


`void` in C does not have a value. You can't make a variable and put a void in it because there is no such object as "void".


That is an artificial restriction of C. Also see how `!` in Rust is losing its artificial restrictions.


Indeed, and there has been some talk of removing this restriction in C++ because it makes certain kinds of metaprogramming a lot more cumbersome than they should be.


Yes, and () isn't void; the poster above me is wrong. Void is...the absence of a type? It's awkward.

However, () and Unit are.


It's not the same thing, a value of type Unit can only produce a side effect (or do nothing at all).


How is that different from a value of type "empty tuple"?


True, neither are useful as values, though Unit typically conveys programmer intent (to produce a side effect), whereas the empty tuple is, in Scala at any rate, quite rare.

The empty tuple:

    scala> val empty = Tuple1(())
    empty: (Unit,) = ((),)
vs. Unit:

    scala> val empty = ()
    empty: Unit = ()


Isn't that a tuple containing a unit, and therefore not empty?


It's the closest you can get in Scala to represent an empty value whose type is `Tuple`.


The difference between an empty tuple (also known as unit) and void becomes obvious when you deal with vaguely complex trait impls. For example, if you have a trait:

    trait Foo {
        type ErrorType;
        fn bar(&self) -> Result<u8, Self::ErrorType>;
    }
How would you specify that your type implements Foo in such a way that bar() cannot return an error? If you were to implement it using the empty tuple (unit), like this, it could actually return an error:

    struct Abc{}
    impl Foo for Abc {
        type ErrorType = ();
        fn bar(&self) -> Result<u8, ()> {
            Err(()) // oops, we don't want to be able to do that!
        }
    }
Instead, you can use Void here:

    #[derive(Debug)]
    enum Void{}
    struct Xyz{}
    impl Foo for Xyz {
        type ErrorType = Void;
        fn bar(&self) -> Result<u8, Void> {
            // No way to create a Void, so the only thing we can return is an Ok
            Ok(1)
        }
    }
Aside from the obvious power to express intent (can we return an error without any information attached, or can we not error at all?), this would allow an optimizing compiler to assume that the result of Xyz::bar() is always a u8, allowing it to strip off the overhead of the Result:

    fn baz<F: Foo>(f: F) where F::ErrorType: std::fmt::Debug {
        match f.bar() {
            Ok(v) => println!("{}", v),
            Err(e) => panic!("{:?}", e)
        };
    }
    ...
    baz(Xyz{}); // The compiler can notice that f.bar() can never return a Result::Err, so strip off the match and assume it's a Result::Ok
A super-smart compiler would even make sure it's not storing the data for "is this an Ok or Err" in the Result<u8, Void> at all.

Finally, similarly, you can specify that certain functions are uncallable by having them take Void as a parameter.


I don't think anyone is claiming Void in Scala/Haskell/Rust is equivalent to Unit in Scala/Haskell/Rust.

The question here was Unit vs empty tuple.

Up-thread was the question of whether C "void" is more like S/H/R Unit or S/H/R Void.


Upthread was:

> If Rust follows Scala then `()` is not the empty tuple, but rather Unit (void in C*).

The implication is that () == Unit == void. The empty tuple and unit are essentially equivalent aside from name, void is something else.


Regardless of who is right, you are arguing the wrong bit of it. I contend that, to a person, everyone saying "C void is Unit" doesn't think C void is Void. Arguing that Void is not Unit is just completely spurious. Of course Void is not Unit. No one disagrees.

In truth, C void is not exactly either Void or Unit. Like Void, you can't exactly make one... but you can call functions declared to take it and write functions that return it, and really it just means "I have no information to pass" - which is more like Unit.


Type names are irrelevant here. Unit could be called "Tuple0", or be defined as a synonym of a type named such. The semantics are identical.


> For historical reasons and convenience, the tuple type with no elements (()) is often called ‘unit’ or ‘the unit type’.

https://doc.rust-lang.org/reference.html


void in C can't be created, used, or passed around - () can.


But void pointers can, and are often used.


So in Rust, the equivalent of void*, using references instead of pointers, would look like:

    enum Void{}
    fn foo(v: &Void){ ... }
    let bar : Void;
    fn abc() -> &Void{ ...}
    fn xyz() -> Void{ ... }
You can't create a Void, and you can't cast to it either - it's not a bottom type. So you can't put anything in bar. And as a result, you can't create a reference to a Void, so you can't call foo. And abc and xyz just can't be implemented in the first place.

On the other hand, you can do all of these just fine:

    fn foo(v: ()){ ... }
    fn bar(v: &()){ ... }
    ...
    let v = ();
    bar(&v);
    foo(v);
The fact that you can create and use an empty tuple as a value shows that it is not equivalent to Void.

(All statements here are made within the safe subset of the language - unsafe allows access to intrinsics that would allow a Void to be made, and a reference to Void.)


void * isn't related to void (or this discussion), they just reused the keyword.


This example of "everything as an expression" doesn't take it as far as Tcl, though. In the above code snippet, the conditional body is surrounded by braces, which are syntax. In Tcl, the second argument to the 'if' command is also an expression, which only uses braces as a quoting mechanism, if it needs to.


I'm not sure I understand why it matters if the syntax requires braces or not. The things inside the braces are still expressions.


It matters because if the braces are not syntax, you can decide what code to execute as the condition body at runtime.

    set condition-body {puts "Hello, world"}
    if { $condition } $condition-body
To get this to run, you need braces around the variable name (since `-` ends a bare variable substitution), like so,

    if { $condition } ${condition-body}
But you get the point.


But the "then" and "else" parts of the Rust "if" are expressions. What you're talking about doesn't seem to be related to which things are expressions, it's more like being able to eval code at runtime.


The if expression is not a persuasive example. C/Java/Algol/etc all have it as well.

x = something ? foo() : bar();


The difference is that C/Java/Algol have different syntaxes for things-as-expressions and things-as-statements, and you can't put blocks in the things-as-expressions. In Rust, blocks are also expressions and so have a result (the result of the last expression in the block), so your expressions inside the if can be as complex as you like.

Since functions also have a block, and the return value of the function is the result of the block, this is much more consistent.


In GNU C, you can use a brace-enclosed statement block as an expression.

Funny story; years before I became a C programmer, and at a time when I didn't yet study ISO C properly, I discovered and used this extension naturally.

I wanted to evaluate some statements where only an expression could be used so I thought, gee, come on, can't you just put parens around it to turn it into an expression and get the value of the last expression as a return value? I tried that and it worked. And of course, if it works it's good (standards? what are those?)

Then I tried using the code with a different C compiler; oops!

Anyway, this GNU C feature doesn't have as much of an impact as you might think.


I see two issues with the ternary operator. One, the syntax is much less readable, and two, the consequent and alternate are both single expressions, so you can't do something like:

x = if(something) { a = foo(); baz(a); } else { b = bar(); baz(b); }


It should work in GNU C:

  x = (something) ? ({ a = foo(); baz(a); }) : ({ b = bar(); baz(b); });
However, since all your forms are actually expression statements, we can happily just use the ISO C comma operator:

  x = (something) ? (a = foo(), baz(a)) : (b = bar(), baz(b));
If C provided operators for iteration, selection and for binding some variables over a scope (that scope consisting of an expression), everything would be cool. E.g. fantasy while loop:

  x = (< y 0) ?? y++ : y;   // evaluate y++ while (< y 0), then yield y.
Variable binding:

  x = let (int x = 3, double y = 3.0) : (x++, x*y);
The problem is that some things can only be done with statements.

The ternary operator is not actually lacking anything; with the comma operator, multiple expressions can be evaluated. What's lacking is the vocabulary of what those expressions can do.


I've always read ternary statements to myself as a question.

  some_condition ? this : that
some_condition? then this, otherwise that

Typing this, I realize how hard it is to explain without speaking it :)


Is the color red? If so, this; otherwise, that.

It's a little tricky, because in the above, the `if` keyword appears after the question mark.

Is the color red? Yes-- this: no-- that.

Trying to make a parsimonious English sentence while maintaining the syntax elements : P


> some_condition ? this : that

Just read it as:

  IF some_condition THEN this ELSE that


You compute baz in both branches, you can refactor.

    x = baz(if something { foo() } else { bar() });


If you insist on the assignments:

x = something ? ((a=foo()), baz(a)) : ((b = bar()), baz(b));

Otherwise:

x = something ? baz(foo()) : baz(bar());


a and b have to be defined before the statement containing the terniary expression here. And the point is that you can embed arbitrary multi-statement logic in your if expressions in Rust in the same way you'd do it anywhere else.


Tcl does have a lispy feel to it, but s-expressions in Lisp are much more elegant than strings in Tcl, imho. Greenspun called Tcl the Lisp without a brain[1], which can be taken both as a compliment or an insult.

[1] http://philip.greenspun.com/tcl/introduction.adp


Tcl commands are lists, not strings. Or more precisely, they are coercible to strings or lists, but a well-formed Tcl command string is always coercible to a well-formed Tcl list (not all Tcl strings are coercible to lists).


Thanks for the correction. It's been a while since I used Tcl and my memory of it was incorrect.


It used to be strings all the way down before the object system was implemented in 8.0.

That incidentally is part of the reason why the expr command was created to process infix expressions since it would be too costly to keep converting sub-expressions back and forth from strings to numbers.

http://www.tcl.tk/software/tcltk/8.0.html


TCL, while it has its warts, is a really cool language. I mean, it even basically has fexprs, something most Lisps put by the wayside years ago.

When you don't actually care about speed, you can do some pretty cool stuff.


Yes, I worked on a startup that did pretty much something like Ruby on Rails, but with TCL, inspired by AOLServer.

The speed critical parts were written in C and loaded as TCL extensions.

Back in the first .com wave.

It also taught me to never again use a programming language without JIT/AOT compiler on their standard toolchain for heavy loads.


Interesting. There was this company called Vignette (in the same period). They too were doing .com projects for clients, and, IIRC, using AOLServer which I read was in Tcl. Tcl was used for the projects.


Yep, some of our guys went to work for them in projects across the Iberian Peninsula.


That's a good idea.

They've been working on TCL perf (and even compilation), though, and there have been some improvements, especially when you're not doing metaprogramming. But still, using it on heavy loads isn't a great idea...

(I mean, honestly. I'm starting to think picolisp might be faster, and picolisp has 3 types and one data structure. Haven't run the benches yet, though.)


Hmm, was that the one headed by Phil Greenspun? Arsdigita or something like that...?

Back in the day, aolserver w/ TCL + (open)acs + pgsql/oracle was teh awesomeness compared to LAMP that everyone else was doing. Oh well... :(


No, in Portugal.

We were eventually acquired by a company doing helpdesk and CRM software.


Yeah the level of dynamicness (dynamism?) you can get in tcl is unparalleled as far as I can see. Having no types or syntax and access to the entire runtime at any point in the program opens up all kinds of crazy doors. But you're right, that slows it down.

But you might be interested to know there is currently an effort to get Tcl to compile to native/near-native code. Here is a paper on the new techniques being developed and a link to a talk given by one of the lead tcl core team members.

http://www.tcl-lang.org/community/tcl2015/assets/talk14/TheT...

https://www.youtube.com/watch?v=RcrqmZV88PY&t=38s


I believe IO lets you do the same. Everything is a prototype with slots, communicating via messages.

I guess Self was the same way. Self & Smalltalk also give you access to the entire runtime.


But TCL actually goes beyond that.

Most languages of this sort, like Smalltalk, Lisp, and especially slower, more liberal implementations like PicoLisp, allow for compile-time and/or runtime AST transformation, and other sorts of metaprogramming.

In TCL, everything is a string. Or at least, everything behaves like a string in the proper context. When you pass code blocks into a command (like if, or while, or whatever), you're not passing code objects: you're passing unevaled strings. This is why expr works in TCL: there's nothing special about expr, it's just actually implementing a DSL (sort of) rather than evaling your code straight up.

The practical upshot of this is that unlike smalltalk (I think: Can an ST user actually answer this?), and to a greater degree than LISP and FORTH (;immediate and readtables are a lot more painful to wrangle), you can not only modify the semantics of the language: you can modify the syntax.


Smalltalk just like Lisp has very little syntax, almost everything is built on messages, including data type creation, conditionals and loops, among other things.

Also you can at any time just completely replace one object by other via the becomes: message.

There are also some cool tricks when metaclasses are used, many of each one can see in Python as well.


However, IIRC, unlike Lisp, ST doesn't have any AST transformation or parsing hooks (macros and readtables, in Lisp parlance), so while ST has a lot of the semantic extension capabilities of Lisp (indeed, it's more semantically extensible than some of the less Object Oriented Lisps), it lacks the syntactic capabilities for extension.

However, you know ST better than me. Am I right?


As far as I can remember no (Smalltalk was long time ago for me, 1995).

But I think there is already quite a few things possible via messages and metaclasses, even if one cannot do actual AST transformations.

After all, the whole image is accessible, so you can dynamically ask any object for its definition, or even compiled code (bytecode or JIT) and change them.


>But I think there is already quite a few things possible via messages and metaclasses, even if one cannot do actual AST transformations.

That's true. In fact, there are things that messages and metaclasses can do that you can't do with AST transformations without implementing those abstractions.

>After all, the whole image is accessible, so you can dynamically ask any object for its definition, or even compiled code (bytecode or JIT) and change them.

I still miss this in Lisp. Some Lisps had that, once, but it's uncommon nowadays. It happens in the commercial CLs, but those are expensive. Open-source CL implementations rarely have good image support (in SBCL, image saving actually corrupts the RAM state to the point of nonrecoverability, and the docs recommend fork(2)ing if you want to save an image and continue your app).

Aside from the proprietary CLs, PicoLisp is the only modern lisp environment that has this kind of dynamic capability AFAICT (and while it does sort of have images, in the form of external symbols, the language doesn't encourage using them like this. Also, calling it modern is a stretch: it has more in common with LISP 1.5 than, say, CL). And the Schemes? Don't make me laugh. Scheme has many strengths, but reflection isn't one of them. It's something that I really wish the Lisps had.

One of the reasons I want to try my hand at implementing Lisp on the Spur VM at some point.


Picolisp can match it in dynamism.

Seriously, Picolisp is absolutely insane.


I just read a bit of documentation from the Picolisp page. It looks really cool, but can it reach arbitrarily far up the call stack? That's the quality of tcl that I don't see other places. The capabilities of 'upvar' and 'uplevel'.

[Edit] I should say "one of the qualities." The other important one is that Tcl has no types. Even all the lisps I know have types.


The functions you're looking for are called `up` and `run` in picolisp.

And before you ask, yes, picolisp has list interpolation (or quasiquoting, in lisp parlance), so you can control which parts of a certain chunk of code will be run when, and in which contexts. It also includes the `macro` fexpr, which makes interpolation more convenient.


Wow, very interesting. Thanks for pointing out picolisp!


>The other important one is that Tcl has no types.

Is it that it has no types or that everything is a string? asking, not stating.


Yes, everything is a string. But you can pass data around anywhere you like without worrying about coercing. A command will receive its data in string representation and treat it however it likes. So when you go to do math with the 'expr' command, 'expr' will treat its arguments as numbers.

    set x 5
    expr { $x + 3 }
'expr' receives x as the string 5 but knows to treat it as an int.

I'm not sure that's a great explanation. Maybe a decent summary of the concept of 'no types' is "the language does not presume to tell you how you can or cannot use your data."


Thanks. Yes, I get it. I had read part of the Ousterhout book. And also Don Libes' Expect book uses Tcl, since Expect is written in it. Had read a lot of that too. Both very good ones. Didn't get to use Tcl in projects though. I'd read some years ago that Tcl was used heavily in the electronics / EDA industry.


Pity that it became so associated with the Tk GUI toolkit -- half the Linux GUI apps in the 1990s were in Tcl/Tk, and when Tk fell out of favor so did Tcl.


...And with TTK, you can finally have easy-to-write cross-platform UIs that actually look native.

Seriously. If you want a decent cross-platform UI system with minimal effort, which has bindings in just about every language, TK is really worth your time now.


Tk UIs will never be acceptably native as long as scrollbars remain a separate control from the thing being scrolled. This stops OSes from deciding where the scrollbars should go based on input device and locale.

On OSX in particular, Tk UIs always stick out like a sore thumb for this reason, even with TTK.


I have looked at it but don't find examples that truly look convincing, and the info on the official page looks very sad.

http://wiki.tcl.tk/9485


That's outdated and unofficial. Look at some examples with ttk.


Ok, but where?

TTK is something new?


Should have been made available on the web by some browser vendor in the 90s. Netscape invented a language, Sun wanted to embed Java, but went with the applet approach in lieu of Netscape, Microsoft put vbscript in IE.


Correct me if I'm wrong but doesn't Ruby have everything (maybe just most?) things be an expression. I always liked

    x = if condition
          something
        else
          something_else
        end


Everything is an expression in Ruby, yes. Even `class` and `def` and `module`, for example. Though class, as a specific example, returns nil, so it's not terribly useful that it is one.


In Lisp, "def..." macros, like defclass, defun, etc. return the symbol to which is bound the definition. Not essential, but useful at times. I tend to find Lisp is full of details like this.


Well, Common Lisp!

In the Scheme language, many imperative forms have an unspecified result. For instance, see R7RS 4.1.6: "the result of the set! expression is unspecified".

Also, a related misfeature is that function arguments can be evaluated in any order.


Yes, this is one of the things that I really want to punch the SSC for. Assignment should have a useful return value: even C, so often said to do The Wrong Thing, got that right.

But for those of you who aren't scheme programmers, it gets worse: because when RnRS says that the result of something is "unspecified" many implementations take it literally. That's right: in many Schemes, `set!` returns the literal value #<unspecified>. I swear I'm not making this up. It's awful.

To quote Jonathan Gabriel of Penny Arcade: "Baby, why you always gotta make me hit you?"


Yes, sorry, Common was implied (not claiming it should always be, but here I wrote it without thinking about others). Scheme's unspecified results are annoying indeed.


Your profile says "Ask me about Tcl".

How?


I don't know many other tcl hackers, and I'm really interested in what the community looks like right now. I put that there to drum up conversation with a tcl hacker.


You can reach me via my profile.


Is there any literature on how to translate languages where everything is an expression to ones where it isn't, for example when compiling/transpiling to java? If/then can be translated to the ternary operator, but terms like try/catch are trickier.


An easy to follow example is Coffeescript (where everything is an expression), which transpiles in javascript (where not everything is).


Usually when I'm faced with putting a try/catch in an expression in Java, I end up wrapping it in its own method.

It's not the nicest solution, but it would allow targeting Java without disrupting the code structure too much.


In Rust, if-else is an expression, and there's no separate ternary operator.


Yep, I know. Although Rust syntactically looks like an Algol-like language [1], it actually barely is one :). It is more akin to an ML language [2].

[1]: https://en.wikipedia.org/wiki/Generational_list_of_programmi...

[2]: https://en.wikipedia.org/wiki/Generational_list_of_programmi...


My current project is over 90,000 lines of C and I have programmed in C since the 1970's (as well as many other languages). I have never used a ternary operator and I think it reads quite poorly. I just use a few lines of easily read code instead. I also never use a do/while construction with the condition at the bottom of the loop. When I need such a construct I just put in an "if" and "break" wherever needed instead. I don't think C should be dissed because of a few unneeded and probably unused features. In C, you can put more than one statement on a line if they are separated by a ';', but I have never done that in hundreds of thousands of lines of C code.


I don't program in C but I use the ternary operator quite regularly. It's quite handy when you're populating large data structures. For limited use in that case I don't find the readability that diminished maybe even enhanced.


It's been a while since I used Common Lisp but isn't it recommended to use "if" for conditional expressions and "when"/"unless" for conditional statements?


It's a matter of code readability. "when" and "unless" are expressions too, but they look the best when they're used instead of "if" for single-branched conditionals when you don't care about return value.


Yes - from what I recall it definitely helped to make things clearer to use "if" and "when" appropriately.


The choice exists only if you have a single form to evaluate subject to the condition. If there are two or more, of course this is imperative programming and (when condition form1 form2 ...) recommends itself over (if condition (progn form1 form2 ...)). If there is only form1, either one will do, so then it's down to a decision based on whether form1 returns a useful value and/or performs a side effect.


Probably a bit off-topic: I always get frustrated with C++ because `if` is not an expression, you can't do conditional initialization under RAII easily.


if-else is an expression in Kotlin ( https://kotlinlang.org/docs/reference/control-flow.html )


It's interesting, I'm reading Black Swan at the moment by Nassim Taleb, and one of his big rants is about how we get blinded by idealized, platonic forms and ideas when the real world is messy and inherently unpredictable. E.g. trying to explain the forms of nature with platonic archetypal shapes like circles, rectangles and triangles. Lisp and the community around it kinda has that flavor - getting lost in a world of "pure forms" and grand ideas, but downplaying the important but messy practical reality of hardware, useful libraries, and getting cool stuff done with a minimum of fuss. I'm periodically fascinated by Lisp (I wrote an interpreter or two in C) but I wonder if its "Platonicity" is part of its downfall.


I don't think actual Lisp programmers share this obsession with purity and ideal forms. It's more something that shows up in blog posts about Lisp by people who probably don't actually use it. The title of this one is telling: it's about "the idea of Lisp."

On the other hand, if you look at, say, ANSI Common Lisp, it's not at all some kind of perfectionistic attempt at divine elegance. It's a pragmatic compromise resulting from years, decades, of actual use on real computers.

Just browse around the SBCL compiler source code and you'll see that this stuff is developed by people who definitely aren't afraid of the messy practical reality of hardware:

https://github.com/sbcl/sbcl/tree/master/src/compiler/x86-64

Generally spend some time within the Lisp community and see how many people you see fretting over Platonic archetypal shapes and compare to people solving actual problems and using the language as just a nice way to program a computer.

Emacs is another example that demonstrates the spirit of actual Lisp programming as opposed to armchair theorizing about lambda calculus fundamentals.


That's good perspective. My lens on it has mostly been stuff like SICP, Paul Graham's old essays, periodic Lisp HN posts etc. Good to hear there's a community out there that's fine with the messy practical stuff.


Especially since Lisp was, once upon a time, also a systems programming language.


ANSI Common Lisp is rather a design-by-committee monstrosity which was forced on the unwilling Lisp vendors by the Defense Department.

Most of the feature set was designed via backroom political horse trading ("We'll let you include pet feature X if you support us for our pet feature Y".) There is no coherent overall plan or design to it at all.

(Source: personal communication from a member of the committee that designed it.)

It's based on actual use on real computers of the late 1970s and early 1980s -- e.g. the file opening mechanism is a complex abstraction designed to support filesystem paradigms that nobody has used for 30 years, yet there is no standard way to open a TCP socket.

I heartily recommend Clojure (clojure.org) as an alternative: a modern, pragmatic Lisp designed for 2016-era software engineering.


Given that most of the hundreds of Lisp implementations each have their own way to open a TCP connection, Clojure just added another incompatible one. To claim that it is a standard one, is a bit funny. Each of the hundreds other implementations could claim that, too.

Clojure generally added incompatibilities, since it is fully incompatible with every Lisp before it in fundamental ways. Clojure was designed with zero backwards compatibility. Lisp concepts were removed, renamed, redesigned. Even identifiers with the same name do completely different things. Where it does something similar to what Lisp did, Clojure has surely renamed and redesigned it.

I doubt that you ever had talked to anyone from the ANSI CL committee. It would also have been easy to find out that Common Lisp was designed by a few core people (the gang of five) with lots of community input from 1980 to 1984. This part is well documented. 1984 the first version of Steele's book Common Lisp the Language was published. The ANSI Common Lisp standardization was started later in 1986, when the core of Common Lisp was already defined. Even there the major extensions were designed by small groups with community input. See for example how CLOS was designed by a few people (Daniel G. Bobrow, Linda G. DeMichiel, Richard P. Gabriel, Sonya E. Keene, Gregor Kiczales, and David A. Moon.) and by providing a complete reference implementation (PCL).


You missed the point. Clojure's way of opening TCP connections is standardized within Clojure -- all Clojure programs use a standard API call to do that.

Common Lisp, on the other hand, has no standard way of opening a TCP socket at all, because TCP was uncommon when it was designed. It relies entirely on (poorly documented, often unmaintained) third party libraries to do that.

As for your doubts, you can doubt all you like, that changes nothing. I am well aware of the history of ANSI CL, and I'm not sure what point you are trying to make with all the namedropping.


> all Clojure programs use a standard API call to do that.

Clojure says nothing about creating TCP sockets, since Clojure implementations (all three) need to call the hosting systems call or emulate it somehow. The JVM Clojure uses a different call than the CLR Clojure.

Which makes it worse than Common Lisp, which has widely used socket support with usocket and some others.

> Common Lisp, on the other hand

Is a real language standard with many different implementations.

> It relies entirely on (poorly documented, often unmaintained) third party libraries to do that.

Each Common Lisp implementation has a documented and maintained way to open a TCP socket. Additionally there are compatibility layers like usocket

https://common-lisp.net/project/usocket/

> I am well aware of the history of ANSI CL,

Then why are you writing obviously wrong things?


Clojure's an odd, unlispish language, at least for us Lispers and Schemers.

CL is still very good at what it does: all current implementations have solved many of the problems you mentioned, and there are a lot of libraries that will run across implementations.

CL is a beast, but clojure is a mess. Scheme is elegant, but has a radically different, more ALGOL mentality than CL, at least in some respects. Some of it is good, some is bad. I love it, but there are things that I would rather program in CL any day.

IMHO, clojure has neither the practicality of CL nor the elegance of Scheme. It has its advantages, but it's not ideal.


Could you explain what you think is wrong with Clojure? Maybe it's because I've spent far more time using it than CL, but I see Clojure as having a very consistent, well designed core. It's very opinionated in its design, but it's a practical and pragmatic one. I don't see what's unlispish about it.


I use Common Lisp, and now sometimes Clojure. I don't get why folks call Clojure a modern-Lisp. I miss the cons cell abstraction, the multiple values support, condition system, the fast start-up times, the native compilation, and executables.

Stacktrace in Clojure with no possibility to restart also make me sad.


All of these are rather minor secondary issues not related to the core Lispiness of the language (with the possible exception of cons, which is in any case still not terribly important in a practical sense as there are mostly equivalent alternative ways of organizing your code.)


Conditions are something I miss in Scheme. They sort-of-not-really exist in my Scheme of choice, and it makes me sad.


Well, there's the fact that lists aren't conses, and conses as Lispers would expect them to exist don't. I think conses are pretty useful.

But Clojure has the cons operator anyways. This also means that clojure's `read` violates one of read's important guarantees: the structure you read in will be identical to the structure you wrote out.

Then there's the macro system. Granted, it's better than CL's in some respects (it does what you want by default), but there are problems. Like not being able to use macros inside the packages that they're defined in. It's just generally less clean than Scheme's solution, and less versatile than CL's.

And then there's all the things it's inherited from Java: an object system that isn't properly OO, and a lack of TCO (which would be fine, if it weren't for the fact that the language so very clearly wants to have TCO).

At the end of the day, Clojure's fine. It's not terrible or anything. I just disagree with some of its design decisions, just like with Racket.

But I do object to it being called The One True Lisp for practical use, because that's nonsense. Scheme and CL are both quite practical, and while most Schemes/CLs don't have the Java interop that makes Clojure so full of nice libraries, most of them do have a C FFI. And the C FFIs they have (at least in the Schemes I've seen) are some of the nicest around.


I agree that it isn't the One True Lisp, because there will never be one. Clojure is just a nice, modern lisp with some ideas I really like at the core of its design.

I don't really care about having conses the way CL has them, and seqs feel like a very nice abstraction over the concept, though I also tend to use the idiomatic map-heavy style anyway.

Clojurescript has the staged macro issue, but JVM Clojure doesn't.

I wouldn't even say that Clojure has an object system, and wouldn't want one. Lacking TCO is a shame.


All that immutability humbug, for starters. Traditional Lisps are "everything you can access is mutable".

Lisp trusts the programmer to be responsible with mutability and not to abuse it. The trust is not misplaced; the sky doesn't fall.


Immutability was what I had in mind calling it opinionated, and it's an opinion I agree with. After learning Clojure, I now hate working in languages with mutability. It obviously isn't necessary to make working applications, but I like not having to think about where else I'm passing this particular data structure, and the language has enough escape hatches for places when it really would get in the way.


This is merely a psychological issue, because when you're programming in an "everything-mutable" Lisp dialect, you also don't worry about this. If you don't know where else you're passing that data structure and don't care to find out, then ... you don't mutate it. Libraries and API's simply don't mutate the inputs that you pass to them, unless loudly documented otherwise. Thus, mutation is applied in controlled ways whose scope is easy to ascertain and limit by inspection.


...Especially in Scheme, where mutation is, by convention, loudly proclaimed! (not-quite-pun intended)


> yet there is no standard way to open a TCP socket.

The 'standard way' to create a TCP client socket in Clojure:

    (java.net.Socket. ^String host (int port))
For the .net version:

    (System.Net.Sockets.TcpClient. ^String host (int port))
It directly calls the functionality from the platform it is hosted on.


Yes. That's the point. The one standard way to open a TCP socket is with java.net.Socket.

Common Lisp, on the other hand, has no such standard way, as TCP was not common when it was designed. It relies on a hodgepodge of poorly documented third party libraries, or vendor-specific extensions to do this.

Clojure is, by definition, a JVM hosted language. The .net version is non-canonical.


> The one standard way to open a TCP socket is with java.net.Socket.

So Clojure the language has no documented standard way to open a socket. One has to use the host environment (JVM, .net, ...) interface to do so. It means also that Clojure/JVM source code trying to open a socket will not run on Clojure/CLR without changes.

That's not a 'standard'. It's the definition of 'implementation specific'.

That's different from SBCL, which has a documented and maintained way to create sockets, which works similar over all the platforms it runs on - natively.

http://www.sbcl.org/manual/#Networking

For a portable way to create sockets over many Common Lisp implementations use usocket:

https://common-lisp.net/project/usocket/


Why do you have to resort to such bullshit in order to promote your favorite language? Please back up your claims.


What about ISLISP? Granted, there are much fewer implementations compared to CL.


But then again, there's Scheme ...


Even Schemes can take a practical turn, e.g. Racket.


Racket isn't Scheme. It is its own dialect. That's why it's no longer called Scheme.

At present, there are only a few Schemes that are practical: Chicken, Guile, Chez, and Gambit seem to be the big players, with Cyclone, Chibi, and Bigloo bringing up the rear.


Wow, pedantic much? It started out as a pure Scheme, then "took a practical turn" as the parent said. You can still make it act like a pure Scheme with a #lang directive.

It's also kind of funny that a handful of major, practical implementations is considered a low amount. Number of Python implementations? Between one and three, depending on definition. Number of Javas? 2 or 3 again. Number of Clojure implementations? One. Javascript? Half a dozen at most.

Seems to me like the problem is standardization between these implementations, not overall number.


>pedantic much?

Not at all.

>It started out as a pure Scheme, then "took a practical turn" as the parent said.

False.

PLT Scheme, like all serious schemes, started out as Scheme extended for practical use.

Racket, what PLT Scheme became, isn't an extended Scheme: it is a different language with its own semantics. That's not good, bad, or pedantic, but it is true.

>You can still make it act like a pure Scheme with a #lang directive.

No, you can make it use RnRS. The difference is that in actual Schemes, the language you write in is extended RnRS, not an entirely different language.

At the end of the day, calling Racket a Scheme is like calling Java a C: they look similar, but they are radically different under the surface.


I think the truth is somewhere between what you and dTal said. First, Racket isn't much more different from "Scheme" now than when we changed the name in 2010. Second, if you copy-and-paste a random portable Scheme program into #lang racket, it will probably work. This is somewhat less likely than with most systems that call themselves Scheme, but way more likely than with Scheme's closest relatives (Arc, Clojure, etc), and similarly with Java vs C.

Ultimately, Scheme is a language family, and Racket is further away than most or all of the other members, but there's considerable distance between Guile and Gambit as well (to pick some other examples).

Full disclosure: I'm a core developer of Racket, but not all of us have the same perspective on this question.


Number of Clojure implementations? One.

At least three:

- Clojure on the JVM: https://github.com/clojure/clojure

- ClojureScript (JavaScript): https://github.com/clojure/clojurescript

- Clojure on the CLR: https://github.com/clojure/clojure-clr


I stand corrected. That was the only one I didn't check out. I didn't know about Clojure on the CLR but in fairness I should have remembered Clojurescript.


Or Chicken.


Yes, I really like Taleb's books because they are about the gap between theory and practice -- and in particular, how to make bets and take action to discover that gap.

His later books go into this too: how academics rewrite history in favor of the ideas they created, but "tinkerers" create history.

I believe he uses the example of the Wright brothers. Were they physicists? No, they were engineers and tinkerers. And what is amazing is that people still argue about the physics of how planes fly!!! Practice often precedes theory.

Whether there's a similar phenomenon in computing is an interesting question. Computing is sort of special because the finished product, a program, is probably the closest thing to a pure idea that you will find in engineering (as opposed to a plane or a telescope). It is created almost entirely in the human mind.

On the one hand, you could say that what you learn in school is idealized and gives academics too much credit. To use a recent example, what was the contribution of Phil Katz vs. academic research in compression? That would make for an interesting essay I would love to read.

What about BitTorrent, or BitCoin? I believe plenty of academics were trying to create systems like BitTorrent, and publishing papers about them, but Bram Cohen said he pulled a bunch of magic numbers out of his butt and dealt with router quirks, and made it work. But certainly he also used computer science.

I like this essay, "Notes on postmodern programming": https://scholar.google.com/scholar?cluster=16064138633971247...

Some excerpts:

The word “algorithm” is often claimed as the central concept of computer science [35]. “Algorithm”, however, leaves out large amounts of the discipline of programming: components, patterns, protocols, languages, data structures [76].

There is equal acceptance of high and low culture: Visual Basic and Haskell are equally of interest, as there is no reason to applaud the one and disparage the other

Postmodern programming rejects overarching grand narratives. As a result, it favours descriptive reasoning rather than prescriptive. Rather than working top down from a theory towards practice, postmodern programming theories are built up, following practice.


This is the challenge I have as well. I want to use Lisp for a lot of things but the reality is that parsing PDFs, tagging parts of speech, and then throwing it all into Postgres/Elasticsearch is a lot easier with a bunch of "gem install" commands than anything I've seen for Common Lisp.


(ql:quickload "packagename") works great in Common Lisp.

The QuickLisp package manager has been around for a few years now.


QL is a package manager. I'm talking ecosystem/libraries. Use QL to install postmodern and tell me how you feel about its time zone support.


If you want to see a Lisp designed for practicality (other than CL, which definitely was, make no mistake), Kaz Kylheku (who goes by kazinator around here)'s TXR is a neat thing. It's a combination of Lisp and a pattern-matching language, designed for data-munging tasks similar to those done by AWK and Perl. But it's actually better than AWK (I won't go so far as to say it's better than Perl: it is, however, more regular than Perl, which is a nice thing), because the pattern-matching language was designed with multiline records in mind, and is a lot more readable than a long regex, IMHO. And the built-in Lisp was deliberately optimized for succinctness, something many Lisps don't do well. You can find some examples at http://www.nongnu.org/txr/.

And if that's not practical, I don't know what is.


Quite right. That's why Clojure was invented.

Clojure is a modern Lisp with the explicit goal of being a practical and pragmatic Lisp intended for getting real-world software engineering tasks done, not just a grand walled garden with beautiful and pure ideas but without standard libraries for stuff like, say, "opening a TCP socket".


CL has no ISO standard library for opening a TCP socket.

Clojure has no ISO standard library for opening a TCP socket.

I don't see the difference.

Usocket: Common Lisp socket library:

https://common-lisp.net/project/usocket/#implementations

Walled garden? I can make a native executable for Windows using any one of several Lisp implementations. No framework or VM or whatever required.

Clojure saves some people from programming the JVM in Java; good for them.


> This is a proof by construction that the language is computationally complete.

The definition of Turing completeness in the article is not correct. A language being able to execute programs written in itself is not a sufficient condition of Turing completeness. Trivial example: define a language with one pre-defined term, x, which names a routine that takes a string as input, checks whether it's "x", and executes it if so. The empty language is a counter-example as well, but that's cheating.
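To make that counter-example concrete, here is a minimal sketch in Python (the "language", its single program "x", and the function name are all invented for illustration):

```python
# A made-up "language" whose only valid program is the string "x".
# Its interpreter can run every program in the language, so the language
# "interprets itself" -- yet it obviously can't simulate a Turing machine.
def interpret(program):
    if program == "x":
        return "ran x"          # the only behavior the language defines
    raise SyntaxError("not a program in this language")
```

Self-interpretation here is trivial, while computational power is nil, which is exactly why the article's "proof by construction" doesn't go through.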

I'm also not sure if the use of the phrase "fixed point" is a misunderstanding of the definition of a fixed point or just an unfortunate use of a term that already has great significance in LISP.


I agree...I think a real "proof by construction that the language is [Turing] complete" would just be an interpreter for Turing machine programs written in Lisp, which is pretty boring.
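For concreteness, such a (boring) proof object is just a Turing-machine stepper; here is a sketch in Python rather than Lisp, where the rule encoding `(state, symbol) -> (write, move, next_state)` is an assumption of this sketch:

```python
# Toy Turing-machine simulator. '_' is the blank symbol; 'halt' stops it.
def run_tm(rules, tape, state='start', pos=0, fuel=10_000):
    tape = dict(enumerate(tape))
    while state != 'halt' and fuel > 0:
        sym = tape.get(pos, '_')
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        fuel -= 1
    return ''.join(tape[i] for i in sorted(tape))

# Example machine: flip every bit until hitting a blank.
rules = {
    ('start', '0'): ('1', 'R', 'start'),
    ('start', '1'): ('0', 'R', 'start'),
    ('start', '_'): ('_', 'R', 'halt'),
}
print(run_tm(rules, '0110'))  # -> 1001_
```

Writing the equivalent in Lisp's six primitives would establish the Turing-completeness claim directly, which is precisely why the self-interpreter argument is unnecessary as well as insufficient.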


Nice article. Check out Paul Graham's The Roots of Lisp for a similar exploration, in which he shows how to build the metacircular interpreter.

> John McCarthy wrote 6 easy things in machine code

It was actually Steve Russell, McCarthy's grad student, who had the idea of writing McCarthy's eval function in machine code.


> It was actually Steve Russell

I seem to recall reading that McCarthy was actually surprised to discover that Lisp _could_ be run by a real computer; he intended it to be a completely theoretical tool.


I have also heard that, but only from secondary sources. Here is a clip of Russell talking about the time he wrote the first lisp interpreter. He doesn't mention McCarthy being surprised, but he seems to imply that he, Russell, was quicker to grasp the idea of translating the functions McCarthy had been writing to machine code.

http://www.computerhistory.org/pdp-1/1020b307d766e0019de2b4a...


No, that's definitely wrong. Initially they compiled LISP code into assembly by hand (when LISP still looked a lot like Fortran), and the plan was to write a compiler in assembly to automate that. Instead, McCarthy came up with a way to express LISP code (aka M-expressions) as data (aka S-expressions) to give a definition of the LISP semantics in LISP itself. Steve Russell then hand-compiled this definition, and lo, they had a working interpreter of S-expressions. M-expressions were never implemented and the compiler was written in LISP instead of assembly.


I'm working on a Lisp-based introductory programming book:

https://github.com/rongarret/BWFP

Still very much a work in progress. Feedback appreciated.


You may be interested in checking this out for inspiration: http://www.ccs.neu.edu/home/matthias/HtDP2e/

(It's the Intro to CS book used at my alma mater, teaching programming in Racket)


Tried doing some of that with High School kids -- I have to admit, at the beginning it was very difficult for them to wrap their head around the basic concepts in functional programming. The other issue was that the few syntactical rules and prefix notation, while great in the long term, required the kids to do a bit more of mental gymnastics for even basic things at the beginning, so that didn't help either.

But man after the initial hump, I was convinced that this is the route to go through in order to introduce someone to programming with very solid first principles.


That's why I'm using a library (https://github.com/rongarret/ergolib) to smooth over some of Common Lisp's rough edges. The goal of the book, what I hope will make it unique, is to teach all of the basics without having to get too hung up on the details of CL.

The reason chapter 3 is taking so long is that I can't figure out a good way to get around one of those details. I want chapter 3 to be about parsing i.e. I want the reader to build READ before they build EVAL. So I want to introduce READ-FROM-STRING, and one of the things I want to be able to read from strings is characters. Unfortunately, CL uses the same character (backslash) as the reader dispatch macro character for characters (e.g. #\x) as it does for the escape character in strings. So if you type #\x you get the character x, but if you type "#\x" you get a reader error because the backslash is consumed as an escape character inside the string. I have yet to find a satisfactory solution.
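The collision is the usual escape-doubling problem. A sketch of the same situation in Python terms (the analogy, not CL's reader itself, is what the snippet shows; CL would need "#\\x" in the string for READ-FROM-STRING to see the three characters #\x):

```python
# Escape-character collision, illustrated with Python string literals.
# To put the three characters #, \, x into a string, the backslash must
# itself be escaped, because backslash is the in-string escape character.
s = "#\\x"
assert list(s) == ['#', '\\', 'x']   # three characters, as intended
```

Any workaround for the book would likely mean either teaching the doubled backslash up front or side-stepping character literals until after READ is built.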


I have met people who were already earning a paycheck as professional C programmers, who didn't fully grasp what it meant to return a value from a function. If I were teaching beginners these days, I would do what SICP does and tackle the substitution model of evaluation head-on at an early stage.


For anyone interested, John McCarthy's original paper on Lisp is here:

RECURSIVE FUNCTIONS OF SYMBOLIC EXPRESSIONS AND THEIR COMPUTATION BY MACHINE (Part I)

http://www-formal.stanford.edu/jmc/recursive.html

From the page:

"This paper appeared in Communications of the ACM in April 1960. It is the original paper on Lisp."

I had mentioned it in this blog post in which I gave a few examples of doing simple computations recursively in Python (for beginners).

Recursive computation of simple functions:

http://jugad2.blogspot.in/2016/03/recursive-computation-of-s...


Lisp was developed because McCarthy needed a tool for experimenting with AI. Found a video of McCarthy talking about AI: https://www.youtube.com/watch?v=Ozipf13jRr4

And if anyone cares, here is nice Shirt with McCarthy on it ;) https://www.teepublic.com/t-shirt/666689-john-mccarthy-lisp-...

I think it should be mandatory for CS students to implement their own little Lisp using the building blocks McCarthy described! Instead they are learning Java and its crappy OO...


Is there a kind of walkthrough/tutorial about how to develop a little Lisp interpreter? That sounds like a fun experiment.

PS: Sorry, I am a Java OO developer. But I like to learn :)


Yes, there's this quite good book, "Lisp in Small Pieces":

https://www.amazon.com/Lisp-Small-Pieces-Christian-Queinnec/...


For example: http://www.buildyourownlisp.com/

But there's plenty more.


Another one https://github.com/kanaka/mal (quite famous AFAIK)


What I did was follow Paul Graham's Roots of Lisp[1]. He lays out in simple terms the basic operations, all I did was copy his examples into unit tests and then write code until the tests passed. I didn't even bother writing a parser, I just used JS expressions like this:

    [natives.label, factorial,
      [natives.lambda, [n], [natives.cond,
        [ [natives.eq, n, 0], 1 ],
        [ true, [natives.times, n, [factorial, [natives.plus, n, -1]]] ]
      ]]
    ]
It took me about 5 hours to get the whole thing working from his paper, and I learned a lot in the process.


There's this classic by Peter Norvig (of AI: A Modern Approach fame) about writing a Lisp in Python... which also happens to have an amazing title :D

http://norvig.com/lispy.html

Edit: He also links to his Scheme interpreter in Java in that article: http://norvig.com/jscheme.html
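In that spirit, the core of such an interpreter fits in a screenful. A toy evaluator sketch in Python (invented names and a deliberately tiny feature set; this is not Norvig's code):

```python
import operator

# Minimal Lisp-style evaluator: symbols are strings, forms are lists.
def evaluate(expr, env):
    if isinstance(expr, str):            # symbol -> variable lookup
        return env[expr]
    if not isinstance(expr, list):       # number or other self-evaluating literal
        return expr
    op, *args = expr
    if op == 'quote':
        return args[0]
    if op == 'if':
        test, conseq, alt = args
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == 'lambda':
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)               # function application
    return fn(*[evaluate(a, env) for a in args])

env = {'+': operator.add, '*': operator.mul, '<': operator.lt}
print(evaluate(['+', 1, ['*', 2, 3]], env))  # 7
```

Bolt a reader (string to nested lists) onto the front of this and you have the familiar read-eval loop.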


>Is there a kind of walkthrough/tutorial about how to develop a little Lisp interpreter?

At least as many as there are TODO app examples for web frameworks...


https://danthedev.com/2015/09/09/lisp-in-your-language

A Lisp implemented through JavaScript arrays.


I've made a Lisp that compiles to Ramda.js; the source code is simple: http://github.com/yosbelms/ramdascript


I think it should be mandatory for CS students to implement their own little Lisp using the building blocks McCarthy described!

That used to be part of the intro course at MIT and Berkeley, but they've since switched to Python: https://web.archive.org/web/20091209071455/http://danweinreb...


> Java and its crappy OO

There are those for whom writing an AST acting on lists of lists of lists comes naturally. They have Lisp. For us idiots who model things in terms of state and behavior, and who prefer to let the compiler do the hard work of figuring out an AST representing our ugly, step-by-step solution, there is "crappy OO".


While ML was the meta language for a theorem prover. Funny how these were side effects.


If there's one thing I've learned from programming it's that to build anything good you have to be driven by real use cases.


It's not clear whether your comment approves or disapproves of the one you replied to.


It's not clear to me whether the comment I replied to viewed this as a good or a bad thing (all it said was "funny how..."), so I can't say whether I approve or disapprove. Hopefully it's clear where I stand though.


True. The only clear thing is that nothing is clear in this particular discussion branch. I'm ok with this but can't extract any useful information.


I think there's information, we're just not taking sides for/against. Which is fine.


Just like nature.



ps: I didn't bother to check those repos. Very nice to see old lisp code. It's very very clean. I often fear messy lisp 1.5 style but that's quite pretty.


And [O]Caml was originally called "Le-ML" because it was written in Le-Lisp (and targeted the same VM, LLM3).


Someone tweeted that Dan Ingalls' first Smalltalk was in BASIC. I always assumed it was in Lisp (considering Kay mentions Lisp among his influences, and that Lisp was the PL clay of the era).


https://en.wikipedia.org/wiki/L_Peter_Deutsch

He worked on both the Lisp and Smalltalk low-level implementations at Xerox PARC.


I'm having déjà vu; didn't we have this discussion before?


The thing about the way ideas about programming are "sold" to other programmers, is that it has as much to do with the actual profession of programming as a typical tween's conception of being a "rockstar" has to do with the actual profession of being a touring musician. A lot of the really vital hard work is glossed over, and huge amounts of attention are paid to certain abstracted "sexy" ideas.

When people watch someone soldering, their attention is drawn to the iron, and to the shiny melted flowing metal. However, it's really cleaning the tip of the iron and having an iron that can provide enough power at the right temperature that matters.


When I was a kid they made us learn C and Lisp as part of Cognitive Science degree. I don't really use either language, unless you count C++. But I do feel that between those two languages you can understand two ideals really well. One is the idea of a clean symbolic expression, the other is the idea of a portable language that lets you get to the core of what the machine is really doing. Both are useful ways to think about programming.


Some years ago, Paul Graham wrote about there being two conceptually clean approaches to programming languages, C and Lisp. The C family is far more popular, but the trend is to take C as your starting point and add Lisp features to it.

Gosling said that Java dragged the C++ crowd halfway to Lisp.


I wonder what aspects of Lisp he was referring to. The programming model of Java is basically C++ with garbage collection, minus a lot of stuff that makes C++ unsafe and hard to parse.


Good question. Perhaps because Java's OOP is closer to Smalltalk than Simula?

Here is the context:

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


And many of us don't really want to go back. :)


I wonder why lisp isn't as popular as say python for AI, ML, and stuff. I see these fields as having a strong academic tone, and it feels like racket or clojure could be bigger when it comes to that.


Lisp was way more popular for classic AI for many many years, and possibly still is.

For machine learning, you want very fast matrix operations for the actual training/evaluation parts and very good data munging for the "get the data from this file/API/database to your ML library". The Python ecosystem is strong at both, which sets up a good feedback loop for even better libraries to be built on top.

That's my take, anyway.


Back then there weren't really ML libraries to speak of, you had to write your own. Though Symbolics did have a neural network tool and framework they sold.

And you could certainly do things like shuffle lots of data easily for more hardcore processing. For example, Symbolics didn't just have a Weitek floating point accelerator option for their systems, they had vector processing boards for them too that you could use from Lisp. (And even a GPU, the Framethrower, that did full-frame HD with 2D and 3D acceleration! S-Graphics is amazing for the time.)

Also, a lot of AI work back then was focused on symbolic processing. That's largely been eclipsed by the more math-heavy approaches these days but symbolic processing is still around, for example in tools like Cyc.


Well, it was. Specifically, Common Lisp was. But that language's standard was etched in stone in 1994 whereas languages like Python (where most deep learning user-facing code is done) continue to evolve.

I think Python really took off for that because it already had quality and widely-used libraries for writing the code in Python and doing the work in a more efficient place (numpy, scipy). Clojure has one of those for matrix multiplication but not much else there, and I'm not sure Racket has anything at all.


> Well, it was. Specifically, Common Lisp was. But that language's standard was etched in stone in 1994 whereas languages like Python (where most deep learning user-facing code is done) continue to evolve.

This is an apples to oranges comparison. The Common Lisp standard hasn't been updated since 1994. The Python standard has not been written at all yet. Lisp and Python implementations both continue to evolve and be released.


To add to that, the transition from Python 2 to Python 3 is also an example of how not to evolve a language.


Racket doesn't have simple interfaces to things like BLAS, but it does have quite good natively written libraries for matrix operations and other math-related things: http://docs.racket-lang.org/math/


Clojure matrix libs:

    https://github.com/mikera/core.matrix
    https://data-sorcery.org/
    https://github.com/mikera/vectorz-clj
    http://neanderthal.uncomplicate.org/
    https://github.com/tel/clatrix


One factor is that you can't just go to "$language.org" and download the canonical version of the language. There are many different and somewhat incompatible versions of the language for various platforms and multiple decades of books written for the various stages of the evolution of each competing implementation.

Lisp is incredibly malleable and this may have hurt it over the years.


Most algorithms currently labeled machine learning are different in nature from original AI algorithms. Nowadays there is a big emphasis on numerical algorithms, while in the past AI was about symbolic computation -- which is the biggest strength of Lisp. If you only use numerical algorithms, you can use Python or even FORTRAN.


But what makes Python any better at numerical algorithms than Common Lisp? It's not like CL is lacking in numeric support. It probably has superior numeric support than Python, actually.


Some/most Common Lisp implementation have better native support for numeric types than Python, but Python has a wealth of libraries like numpy that bind to optimised C/Fortran/assembly.


Exactly. Without numpy, Python is orders of magnitude slower at numeric computation than Common Lisp. With numpy, it's faster because numpy is basically C.


Numerical libraries in Python are not implemented in the language itself. In typical Python fashion, they're just calling C and FORTRAN libraries.
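The division of labor is easy to see in miniature (assumes numpy is installed; the example is illustrative, not a benchmark):

```python
import numpy as np

# The same sum of squares two ways: an interpreted Python loop vs. one
# numpy call that delegates the work to compiled C/Fortran routines.
xs = list(range(1000))
pure = sum(a * a for a in xs)   # runs in the Python interpreter
arr = np.arange(1000)
fast = int(arr @ arr)           # runs in numpy's compiled inner loop
assert pure == fast
```

Same answer either way; the performance gap the parent comments describe comes entirely from where the inner loop executes.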


Because Common Lisp was the language for AI before the big AI-winter hit and it's now associated with approaches to AI that don't actually work.

Also a lot of the latest AI is hyper optimized data crunching on GPUs which isn't necessarily one of lisp's strengths.


Well, actually *Lisp was a thing.

I think it is also a matter of culture; if the likes of AMD and NVidia cared, they could probably invest some money into making such languages run properly on GPGPUs, instead of leaving it to researchers alone to figure out how to target PTX and ROCm.

On the other hand something like C++17 would already offer many of the Lisp benefits, even if a bit uglier.


Wouldn't it be harder to get Lisp to run in a GPU, though?

Or perhaps my point is, wouldn't it be harder to get old-style (symbolic) AI to run in a GPU than ML-style AI? (If I understand correctly, the old style is a lot of walking data structures, and the new style is largely matrix operations. The latter seems like a much better fit for a GPU than the former.)


Well, the Connection Machine was highly parallel.

Also the declarative way of programming in languages like Lisp, and also the macros, would surely allow for nice expressive DSLs.

So far I am only aware of companies exploring Haskell and F# support for GPUs, but I guess it is usually a matter of someone trying it out.

After all, there is FPGA tooling generation support for Clojure already, e.g. Piplin.



> even if a bit uglier

That was very kind.


Data crunching on GPUs is a LISP strength. It runs code orders of magnitude faster than, say, Python or C++. https://youtu.be/bEOOYbscyTs?t=2151


There was no "AI winter"; that's a myth. Well, or least a big exaggeration when used to explain why certain things are the way they are.

The main force which explains everything is the procession whereby mainframes were replaced by minis, were replaced by workstations, were replaced by microcomputers.

At each stage, the new wave of hardware started small, bringing in its own approaches, tools and languages. As each wave matured, certain software technologies made "the jump". Some didn't. Some made the jump, but their popularity was destroyed. Those approaches which started each wave had a certain edge, even if they were inferior. For instance, the BASIC language was very widely available on the first 8-bit microcomputers that burst onto the scene in the late 1970's, putting computing into the hands of non-institutional users for the first time. In the computing ivory towers of the time, nobody considered BASIC viable any more. Yet, because of this boost that BASIC received, riding the microcomputer wave, it still persists with us in the form of Microsoft's VB.

Lisp was very successful in the 1980's. However, it didn't run very well on microcomputers. If Lisp hackers wanted to make an application to sell on the mass market, they faced the prospect of their customers having to buy expensive workstations and minicomputers. So they did the obvious thing and re-wrote the logic in C or what have you, making it workable on an 8 MHz PC with a meg of RAM. Once someone has "made it" by doing such a thing, they stop learning. Twenty-five years later they are still telling the same story to new recruits about how they rewrote some Lisp thing in C to make it actually run, and made a business out of it, hence forget Lisp.

By the time the next wave of hardware gets to the point that it exceeds the previous generation in capacity and power, it's too late to try to revive most of the stuff that didn't make the jump. The people moved on to something else (or have even become irrationally permanent naysayers), plus other things have obviously changed in the world.

Lisp is doing very well all things considering, because of great expressivity and abstraction, machine independence and overall enduring value. Also, its adaptability: the ability to be reshaped into new dialects. Nothing that old has anywhere near the clout.

As far as the AI winter goes, basically the spiel is that certain funding money dried up for certain types of AI. Even if that is true, what does it tell us? That certain people were dependent on that type of money. They were dependent on it because their stuff only ran on institutional hardware; they were not able to wean themselves off the institutional teat and do something in the mass market. At least, not without changing toolchains.


Imagine an alternative universe where AT&T was allowed to sell UNIX and charged the same price as other mainframe OSes, instead of a symbolic license price for universities.

I have a feeling that would have turned out quite different.


Before compute resources became essentially free, Lisp took up rather more memory on the then-constrained commodity hardware than other languages. I remember being in awe of one of our developers who had a whopping 96MB on his machine - this was in the mid 1990s.


I think Python gained its popularity when Google started using it. When one of the big companies - Google, MSFT, Apple, Amazon start using Clojure for their big projects - I believe it will become extremely popular.


It used to be the language for AI.


The "base rate" popularity of Python is much higher than all those others. This leads (via many mechanisms) to Python being more popular than these others.

(The reason it's Python and not another popular language is another matter - I think for various reasons Python was a more popular language for scientific computing).


Could you explain what you mean by the "base rate" popularity?


Pretty much what czinck said.

Python is much more popular in general, which makes it more likely to be popular in any specific subfield, for various reasons:

1. More chance that people who decide to do anything in ML already use Python.

2. There's better existing support for various ML-related tasks in Python.

3. There's a larger audience available for ML-related things in Python, therefore people think it's more worthwhile to code things in it.

etc.


I'm not the parent poster, but I'm pretty sure they're saying that because Python is more popular in general (the base rate) it's more popular in AI/ML/whatever circles because of better general support (more tutorials, more libraries, more people already know it before trying to use it for a specific problem).


I can't comment too much on this article, as I have a very, very limited view on LISP - basically just a couple of minor tutorials and one of the open-source interpreters. For me, it's always been one of those "I need to learn this" kind of languages, but I've never had a use case for it, and so it remains a curiosity to me more than anything.

I do know, though, that LISP allows (or so I have heard) the creation of DSLs - so I am curious what people here think about this.

I'm also curious if anyone has an opinion on JetBrains MPS:

https://www.jetbrains.com/mps/

...and whether that would be a better thing to learn before or after learning LISP, as well as how it compares to LISP?

It's yet another "thing" that has caught my eye over the years, but again - no use case, and so it remains on the back burner for now...


Lisp is an interesting thing because once you learn it, you start seeing use cases for the ideas you've picked up during the process all over the place.


Does anybody have a few examples of DSLs people make in a lisp (ideally clojure because I have worked with it a tad)? I've seen plenty of cases where people make a pseudo-dsl via optional arguments, but not seen this so-oft mentioned "yeah we just wrote a dsl for it because lisp" sort of deal.


The TXR Lisp dialect provides an awk macro that implements a language closely resembling Awk in its structure and semantics.

For example, write the third field of every record (if the field exists) into `file.{recnum}`:

  (awk ([f 2] (-> `file.@nr` (prn [f 2]))))
The manual contains a translation of all of the Awk examples from the POSIX standard:

http://www.nongnu.org/txr/txr-manpage.html#N-03D16283

The (-> name form ...) syntax above is scoped to the surrounding awk macro. Like in Awk, the redirection is identified by string. If multiple such expressions appear with the same name, they denote the same stream (within the lexical scope of the awk macro instance to which they belong). These are implicitly kept in a hash table. When the macro terminates (normally or via non-local jump like an exception), these streams are all closed.



Serious question: Why are these considered "DSLs"? To me, these are just libraries with functions. Are they considered DSLs just because Lisps are basically an AST to begin with? Because when I think of a DSL, I think of something like SQL, or HTML, that is definitively different syntax, with its own interpreter, for a specific purpose.

Function calls in Clojure don't match with the definition of what's in my head here. This is sort of what I was getting at in my original comment.


Here is an example of a DSL I made in Common Lisp for doing compile-time URL checking in a web application: http://carcaddar.blogspot.com/2008/11/compile-time-inter-app...

Common Lisp makes it easy to compose DSLs so it was trivial to apply the URL checking DSL to the JavaScript generating DSL (Parenscript) and have the browser-side code checked at compile time as well.

Another good example is CLiki2: https://github.com/vsedach/cliki2/blob/master/src/readtable....

The HTML template system is a small DSL on top of a string interpolation library (vs a 10,000 line templating library that can't even get HTML attributes right). It was also trivial to make it use streams to eliminate string allocation/copying.


Matthew Butterick (the author of Beautiful Racket linked in another reply) is solving this year's Advent of Code, all as DSLs: https://github.com/mbutterick/aoc-racket


That repository has last year's challenge and the six days I checked did not involve writing a DSL.


Check out the 2016 branch.


Maxima CAS is implemented as a DSL on top of Common Lisp. It is one of the best environments for symbolic math (other than commercial products such as Mathematica).


A significant portion of Naughty Dog's games were written in a dialect of Scheme that was implemented in CL. The cool feature being rlet, register let, which allowed programmers to write a section of code in Lisp that referenced specific registers.



Hmm, I guess I have seen datascript / datalog. My main question is more about how many articles seem to suggest that it's basically par-for-the-course in lisp programming to just make a DSL in your projects, so was wondering when that might actually occur.


As mentioned in this list, Ihaka and Gentleman's R implementation of S began like this but quickly grew into a larger system. R's source code today is still full of lispisms.


Question: what would a LISP dialect with static typing look like?

EDIT: Found an answer: http://stackoverflow.com/questions/3323549/is-a-statically-t...


I designed and built a statically-typed F-Expr LISP for my thesis, using some partial AST evaluation to trace and ensure every value had at least an initial value, and from that, type inference.

So it looked like:

    (define fib
      (lambda (n)
        (let loop ((a 0) (b 1) (n n))
          (if (= n 0) a
          (loop b (+ a b) (- n 1))))))
Which would do nothing by itself, and be eliminated as dead code unless called.

If we called it with:

    (fib 10)
It would expand, after the macro stage, to:

    (define fib
      ((Type/Number lambda) ((Type/Number n))
        (let loop (((Type/Number a) 0) ((Type/Number b) 1) (n n))
          (if (= n 0) a
          (loop b (+ a b) (- n 1))))))
If a value couldn't be inferred after ensuring the validity of the AST, it was supposed to error out with some helpful messages, but tracing the entire AST forward and back repeatedly always managed to type every value that was at least initialised, and if not, eliminate it as dead code.
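(Illustratively, the propagate-to-a-fixed-point part of what's described above could be reduced to something like this toy Python sketch; this is purely a reconstruction of the idea, not the thesis code, and the `ast` shape is invented for the example:)

```python
# Toy sketch of the inference idea: seed types from literal initial
# values, propagate them through references until nothing changes,
# then flag anything still untyped as dead code.

def infer(ast):
    """ast maps name -> ('lit', value) or ('ref', other_name)."""
    types = {}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for name, (kind, payload) in ast.items():
            if name in types:
                continue
            if kind == 'lit':
                types[name] = type(payload).__name__
                changed = True
            elif kind == 'ref' and payload in types:
                types[name] = types[payload]   # inherit the referenced type
                changed = True
    dead = [n for n in ast if n not in types]  # never initialised anywhere
    return types, dead

types, dead = infer({'a': ('lit', 0), 'b': ('ref', 'a'), 'c': ('ref', 'z')})
# types == {'a': 'int', 'b': 'int'}; 'c' is flagged as dead
```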

Tradeoffs:

Compiling can be very lengthy, and it would be theoretically possible to write a program that would take ridiculous times to compile.

Once compiled, we can ensure type safety, and in the underlying implementation, JIT everything for a decent amount of speed.

Edit: Forgot to add lambda return type. Then added it in the wrong place.


Do you happen to have a link to your thesis?


Unfortunately not.

I had to surrender publishing rights, and the university only publishes about 10 submissions a year... So it's unlikely to appear anytime soon.


> I had to surrender publishing rights

Even as a pdf on your own webpage? What kind of university would do that? Even the most abusive CS publishers have a more relaxed policy…


Well, the nearest competitor requires full copyright transfer, so there's that.

And CS publishing usually happens only within STEM, not as a standalone, within this circle of Universities.

No, not in America. There is far less interest locally in CS, and so a sort of mild tyranny rules in academia.


Common Lisp, Racket, and Clojure all have optional static typing.


They have gradual typing but I believe they're still enforced as runtime contracts, making them not static types.


In a Typed Racket program that does not import any untyped racket code, there is no runtime enforcement. Runtime contracts are only introduced at the boundary between typed and untyped code. This allows programs to be soundly transitioned from untyped to partially typed to fully typed.


That sounds pretty rad. Do you know what the status is of dependent types in Racket? I have always been interested in that.


We're working on it! See the work by my student, Andrew Kent, coming soon to Typed Racket: https://pnwamk.github.io/


Technically, the Common Lisp standard leaves it up to the implementation what to do with type declarations--in practice, I get compile time warnings for violating them (as well as for violating inferred types). Type declarations will not lead to runtime checks, you need to use CHECK-TYPE or similar for that.

Typed Racket is more extensive and gives errors at compile time. I don't know very much about how it works in Clojure.


In SBCL, type declarations are treated as assertions. If the compiler is sure the assertion is valid without doing runtime checks, it can remove them. The optimization levels are also taken into account (http://www.sbcl.org/manual/index.html#Handling-of-Types).




Dylan.


Since this takes so much from things Alan has said, I'd be interested in seeing what he thinks.

Alan, if you're there, would you care to comment?


Out of curiosity, what's the alternative to if/else? Assuming polymorphism wasn't around in the 50's, did people express the idea of conditional execution based on the result of evaluating some expression using and/or? Does this mean that lazy evaluation was around before conditionals?
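For what it's worth, the and/or encoding of a conditional relies only on short-circuit evaluation; a quick illustrative sketch (in Python, which still supports the idiom):

```python
# A conditional built from short-circuit and/or: "test and then or else".
# Works as long as the "then" value is never falsy.
def parity(n):
    return (n % 2 == 0) and "even" or "odd"

assert parity(4) == "even"
assert parity(7) == "odd"

# The classic pitfall: a falsy "then" branch falls through to "else".
assert (True and "" or "fallback") == "fallback"   # wanted "", got "fallback"
```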


I know there is an active community around Lisp and it's still used for development, but I apparently have not dug deep enough to appreciate when it's the best choice for a new project.

Can someone mention a few features or scenarios that make it the best choice for starting a new project?


There are two places where I find LISPs useful.

* Where I might prototype something in Python usually, I can build the Scheme equivalent just as fast, and thanks to Gambit or Chicken's speed and static compiling, it can grow to be bigger than just a prototype with few tradeoffs.

* Anytime I need a DSL, if it isn't LISP, I find myself either disappointed by slowness, or fighting with the language. Example: HTML templates. (x-expr are great!)

In short, speed and flexibility.


This great idea of Lisp (the simple syntax of function calls in round brackets) isn't much different than a good macro assembler even back in the 1960's. The only major difference was that more than 1 function could be defined in 1 source code line. (I think that machine code is nothing but a sequence of function calls where the function is the logic encoded in the CPU itself for each opcode.) Is it fair to compare the complexity of expression evaluation etc (Fortran) with a macro assembler? Obviously any program can be coded in a macro assembler and therefore that would also be true from a syntax like Lisp.

When I was in my 20's, I programmed at least 100,000 lines of Z80 assembler for the first micro computers. One project was at least 40,000 lines and so I know how difficult it is to program larger assembler programs. The biggest problem is that it is hard to see the structure of the loops and conditionals that we normally indent in higher level languages. (You can indent a Lisp program in any way but the language doesn't require any at all.) It is also difficult to recognize expressions. Both of these problems are also there in Lisp (unlike most other high level languages).

One last point about the linked list structure at the heart of Lisp. Linked lists are poorly executed in modern computers that rely heavily on locality of data, to optimize the L1 cache. Lisp was very easy on the compiler/interpreter writer but wasn't very good at optimizing the readability of the code for the programmer. (I don't want a religious war but I will point out that most programmers have never programmed in Lisp even though it was one of the first computer languages created.) Before I get a lot of dissing comments, I think with practice, some programmers developed an eye for the lack of structural clues and made some reasonable size code. You could say the same about some programmers making quite good large scale programs in assembler but that doesn't mean that writing in assembler or Lisp should be encouraged.


You make some specific claims here that sound a little odd to this LISP and assembly language hacker.

Assembly language doesn't provide any datatypes. LISP does. Assembly language doesn't provide any type checking. LISP does. Assembly language doesn't provide automatic storage reclamation. LISP does. Assembly language doesn't provide naming. LISP does.

You also make a claim about L1 caches and locality of reference. Every LISP compiler writer, and every LISP garbage collector writer, knows about CDR-coding. We also know about how Cheney copying garbage collectors and their descendants like the Baker incremental collector compact data, precisely for locality of reference. The compiler writer of course is thinking about cache performance and how lines are mapped in particular target architectures.

You should probably educate yourself a little more about LISP if you are so interested in it as to make statements in a public forum.


He was talking about macro assemblers, which often do provide naming and some level of type checking (and pure assembly is arguably at a lower level of abstraction than type systems, with separate instructions and registers for integers, pointers, floating-point etc).

I agree it's a somewhat odd comparison.

There are of course plenty of obvious and non-obvious ways to optimize linked lists, but even still they have poor performance characteristics, space efficiency and cache locality compared to alternative structures like arrays and even immutable arrays.

There's a reason Java, C#, Python etc store strings as immutable arrays; not only do they start from a generally better performance baseline than lists, but they too have well-understood optimization characteristics.


>"every LISP garbage collector writer, knows about CDR-coding."

I thought CDR-coding needed hardware support, as opposed to it being a general programming technique; is that not correct?

I've only ever read about it on FAQs like in the following:

http://www.cs.cmu.edu/Groups/AI/html/faqs/lang/lisp/part2/fa...


I was only talking about the 'list of function calls' aspect of assembler and Lisp, not the type system. I agree that Lisp has a type system and assembler doesn't. Forth is another language that also has very simple syntax that approximates the 'list of function calls' style that I would say isn't unlike a macro assembler either.

I am writing a new language with built-in garbage collection that I think is quite superior to other languages. I have created a full standard library with almost 1,000 built-in functions and none of my data structures (lists, maps, trees, stacks, indexes, tables etc) contain pointers or linked lists (that use pointers). I sold over 30,000 copies of a language/database system in 1987 so I think your last comment is quite inappropriate. I have known about Lisp since I started CS in university in 1975.

Linked lists are horrible data structures when being used as well as when being freed (your garbage collection comment). I use simple dynamic multi-typed arrays instead of linked lists (pointers) and they can be freed in 1 chunk or a bigger version can be freed with a few memory de-allocations. I get full cache locality and improved speed of allocation and de-allocation.

I would love to see an incremental GC that can copy all linked lists nodes into contiguous memory automatically. Nice trick if you can do it but that doesn't help you if your linked list doesn't cause a GC.


Big time array fan here. I'd love to check out your language when you publish it. Sooner the better.. we need new ideas! Email in profile if you'd like to chat about it.


> One last point about the linked list structure at the heart of Lisp. Linked lists are poorly executed in modern computers that rely heavily on locality of data, to optimize the L1 cache.

You could maybe say that linked lists are at the heart of the platonic ideal of Lisp. Naive interpreters might use linked lists in their internal representation. But real implementations of e.g. Common Lisp or Scheme provide a wide array of data structures (multi-dimensional arrays, hash tables, linked lists).


> This great idea of Lisp (the simple syntax of function calls in round brackets) isn't much different than a good macro assembler even back in the 1960's.

Far from it. Recursive functions and symbolic expressions were nothing like assembler back then.

> You can indent a Lisp program in any way but the language doesn't require any at all.

One can indent it in any way, but practically people use common indentation styles. Lisp also provides formatting&indentation via the pretty printer.

    CL-USER 48 > '(DEFUN COLLAPSE (L)  (COND 
    ((ATOM L) (CONS L NIL))
    ((NULL (CDR L))
    (COND ((ATOM (CAR L)) L)
    (T (COLLAPSE (CAR L))))) (T
    (APPEND (COLLAPSE (CAR L))
    (COLLAPSE (CDR L))))))


    (DEFUN COLLAPSE (L)
      (COND ((ATOM L) (CONS L NIL))
            ((NULL (CDR L))
             (COND ((ATOM (CAR L)) L) (T (COLLAPSE (CAR L)))))
            (T (APPEND (COLLAPSE (CAR L)) (COLLAPSE (CDR L))))))
> I think with practice, some programmers developed an eye for the lack of structural clues

Lisp has a lot of syntax and structural clues, but you just don't know them. You have to learn them, since much of, say, C syntax knowledge does not carry over to Lisp.

>and made some reasonable size code.

Like the 1 million lines of Lisp code of the Lisp Machine OS? Or the several hundred thousand lines of an editor largely written in Lisp (GNU Emacs)?

> Linked lists are poorly executed in modern computers that rely heavily on locality of data

Lisp nowadays usually does not execute linked lists, but runs compiled Lisp. Some data is in linked lists, but there are many other data structures not made of linked lists. Still the language runtime is often pointer heavy.


As someone who's also written hundreds of thousands of lines of assembly with macros, I don't really see the comparison. Lisp macros are written in Lisp itself. You can create complicated data structures, iterate over loops, do file I/O, query databases, access the network, whatever you want at compile time in Lisp, whereas assembly language macros were never much more complicated than the C preprocessor. I certainly did tons of creative things with assembly language macros, but they aren't remotely similar to Lisp macros.

As for perf, 7 very popular and performant games were written in Lisp. Crash Bandicoot 1, 2, 3 as well as Jak and Daxter 1, 2, 3 and Racing.


I never said that Lisp macros were anything like assembly macros. I was referring to a macro assembly language as a list of opcodes (function calls) and macros. I am quite a fan of Lisp macros which are much more flexible as you can create code at compile time from a Lisp program.

I also know that Lisp or assembler can do anything/everything. I don't like polish notation or the lack of structure in Lisp programs. Everything is a function (with only a couple of exceptions).


Agreed about linked lists; that virtually all functional languages make them the default/literal data structure is IMO a poor practical design choice (no matter how theoretically elegant) and is the primary culprit for their reputation for slowness.

And the tendency for functional compile-to-JS langs to emulate linked lists in Javascript and keep them the default data structure is downright laughable.


(Singly linked) lists are a functional data structure. You can manipulate them efficiently without modifying the lists you started with. You can't do that easily with arrays. So if you base your language around arrays, you're better off making it imperative.
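A minimal Python sketch of that point: prepending to a cons-style list is O(1) and shares the entire tail untouched, while producing a "new" array means copying every element (illustrative only):

```python
# Cons cells as immutable 2-tuples: prepending never copies or
# modifies the original list, it just points at it.
def cons(head, tail):
    return (head, tail)

xs = cons(1, cons(2, cons(3, None)))   # the list (1 2 3)
ys = cons(0, xs)                       # (0 1 2 3), built in O(1)

assert ys[1] is xs                     # tail is shared; xs is untouched

# The array equivalent must copy all n elements to leave xs intact:
axs = [1, 2, 3]
ays = [0] + axs                        # O(n) copy
assert axs == [1, 2, 3] and ays == [0, 1, 2, 3]
```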

Lisp predated level 1 caches. I think it's better to design hardware around the software that runs on it (the Burroughs mainframe/Lisp machine approach), than design programming languages around the machines which run them (the C approach). In this case, it means finding a way to make non-local data references more efficient.

Lisp's original reputation for slowness predated the use of Level 1 caches, and was because many early implementations ran on interpreters.


I can 'efficiently modify the lists you started with' with a dynamic multi-type array and I have cache locality and random access. I have a single data structure with an 8 byte overhead, 2 byte overhead per node that can be used as a tuple array with direct lookup, a single linked list, double linked list, a queue, a stack, and a balanced binary search tree. This whole structure is allocated in a single contiguous memory chunk and is cache friendly. It can be serialized and un-serialized without any processing which would be required by a Lisp style linked list even if the list was just moved in memory, saved/restored to/from disk or transmitted to another computer.

Lisp DOES predate level 1 cache but we DO have level 1 cache now and it dwarfs all other optimizations on modern computers. Wouldn't it be nice if the hardware was designed to optimize our languages instead of the other way around but I live in this universe rather than an alternate one.

What matters is now, not decades ago.


I'd be interested to learn more about your language. Is there a web page about it, or a paper? Are you the D Clark who's at UCL?

The way things are now can be changed.

For most applications users care more about response times than CPU clock rates. Hardware has speeded up by several powers of ten (Moore's law) but software has at the same time slowed down (Wirth's law), resulting in little if any net gain, and that has nothing to do with the use of lists: most applications don't use them.


David Clark is like John Smith and no, I am not any famous person but I have been around the micro computer world for a very long time (1975).

My website is www.rccconsulting.com where you will find an essay on the data structure I mentioned here called SLIST and some documentation and high level design for the system/language I am currently developing called MAX.

Users do care more about response times (as you say) and instantaneous and 1/2 instantaneous are considered equal by end users. Most user developed code doesn't need optimization but the data structures that under pin that user code should be as performant as possible, just in case.

Designing a language like Lisp or a special purpose language/system like I am creating has design criteria that are much different from those of the application programmer who is just trying to make stuff work. I have completed at least 1,000 projects big and small as an application programmer, as well as all the system programming I have done, so I think I have good insight into both points of view.

>> use of lists; most applications don't use them

You may be correct, but most programs manipulate variable length character strings, whether that is formatting, creating HTML documents, creating JSON data etc. Most currently used programming languages don't handle variable length strings either well or efficiently. I also don't like the fact that almost all useful data must be manipulated and stored in database systems that reside somewhere other than my computer. My new system was designed to help solve that problem and more.


There's already a language called Max: https://cycling74.com/products/max/#.WFRNp-wWVpg

Your SLIST sounds similar to data structures which go by various names in different languages, e.g. list in Python or ArrayList in Java. Common Lisp vectors support that functionality too.

Your VCHAR presumably works in a similar way? I agree ASCIZ is a very inefficient way of representing strings, unless they're guaranteed to be very short.


Max is the internal name I use for my system. The actual name will be decided when I publish the program.

At the top of my description of SLIST, I acknowledge that I may not be the first to define a simple structure like that but I can say that I never got the idea from somewhere else. These lists can be used as an autobalanced binary tree for small lookups? Are these other structures pointer free with next to no overhead even on a 64 bit compiler? If they can do all that my humble SLIST can do then I will have invented a very good data structure and there isn't anyone using linked lists with pointers, right?

Most languages have made strings read-only, so they aren't very useful for manipulating large and small amounts of variable text. Actually, my VCHAR struct just uses my buffers, which are used whenever memory is needed. My language is object oriented but even more it is collection oriented, so most objects don't allocate space just for themselves. When using my buffer routines, no buffer can ever use data that doesn't belong to it or overflow its allocated space.

I have many local and global memory managers and my buffer structure keeps track of use, size and origin. Memory that is alloced globally is always on cache line boundaries and so I don't put headers on allocated data. All chunks of memory are tracked and can't get lost so I count on less memory allocations than most other languages (collections remember)

Compared to any alternative I have found (Unicode etc), ASCII is quite efficient. When I get around to adding any Unicode support, it will be UTF-8, so none of the current code needs to be replaced or will run slower than now.



This web page doesn't fit into the L1 cache, and your browser uses some horrible data structure for it, the individual pieces of which do not cache well. Why are you here at all?


I would argue that there's no such thing as a "functional data structure." If a data structure maps poorly to existing functional languages, that's a symptom of an insufficiently powerful and generic type system.

Say you have (pseudocode)

  Array(4 16 5 20)
What's its type?

  Array
  Array<Int>
  Array<4>
  Array<Int 4>
  Array(Int Int Int Int)
None of these lend themselves easily to paradigms other than imperative, true. But this is because they unnecessarily discard known information. The type of an array ought to be itself:

  assert Array(4 16 5 20) hasType Array(4 16 5 20)
    => Ok
  
  assert Array(4 16 5 20) hasType Array<Int>
    => Error("Expected Array<Int> but got Array(4 16 5 20). Hint: Typeclass Array<Int> cannot be directly instantiated")
  
  assert Array(4 16 5 20) hasTypeclass Array<Int>
    => Ok
Fully functional and referentially transparent.


And if you want a new array whose first element is increased by one, you either have to allocate space for the entire array, modify the array, or use a different data structure.


No you don't, you merely need each index into an array to be a monad over all possible values it can contain. An array is itself and modifications are (take as arguments and return) its indices.


> you merely need each index into an array to be a monad over all possible values it can contain

Now keep a straight face and say "L1 cache" once again.


And how is that implemented under the hood?


Same as in a higher-level statically typed imperative language: mutable arrays contained within larger dynamic buffers for amortized constant-time append and prepend.
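Roughly this kind of scheme; a toy Python sketch of geometric over-allocation (illustrative only, not any particular runtime's implementation):

```python
class DynArray:
    """Toy growable buffer: doubles capacity when full, so a run of n
    appends does O(n) total copying, i.e. O(1) amortized per append."""
    def __init__(self):
        self.cap = 4
        self.n = 0
        self.buf = [None] * self.cap

    def append(self, x):
        if self.n == self.cap:              # full: allocate a larger buffer
            self.cap *= 2
            new = [None] * self.cap
            new[:self.n] = self.buf[:self.n]  # one bulk copy per doubling
            self.buf = new
        self.buf[self.n] = x
        self.n += 1

d = DynArray()
for i in range(100):
    d.append(i)
# 100 appends triggered only 5 reallocations (cap: 4, 8, 16, 32, 64, 128)
```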


So it isn't really functional at all. It just hides the side effects behind a lot of complexity.


This is precisely what functional programming does. Purely functional abstractions are just that, abstractions.


Code readability depends mostly on giving variables and functions meaningful names, and function size/number of variables. That applies to all languages.

Code layout/indentation is also an issue, but there's a single standard for Lisp, which Emacs is aware of.


It's an issue for all languages which don't force layout/indentation. Why does Lisp get criticized for parens when the C family has curly braces all over the place?


Moreover, people write Makefiles for C, containing notation like:

   $(patsubst %.foo,%.bar,$(foreach blah,$(VAR),\
                                    whatever $(blah)))
People who can say with a straight face they find Lisp parentheses confusing have GNU Makefiles full of this crap, well debugged and all.


There's a lot here, but this one jumped out at me:

> You can indent a Lisp program in any way but the language doesn't require any at all.

Off the top of my head, isn't this true of basically all languages? Except one, and it got a lot of criticism for it (Python).


After I wrote that statement I started thinking that C programs can be formatted so that they don't read very well also. In C, an 'if' is always an 'if' (unless you use the pre-processor to screw it up) but in Lisp an 'if' could be anything. I do like the idea of Lisp macros where you can run a Lisp program at compile time to generate the code that is then compiled inline.

What I should have said was that Lisp has no commands or structure that can't be changed. Some might argue that this is its strength, but I have seen hundreds of samples of Lisp and it looks very confusing. I don't normally program in C# or Java or many other languages but I can normally understand their code. (Lisp and functional languages are the most opaque to me.)

My language uses a byte code interpreter and that byte code is in polish prefix notation just like Lisp. I wouldn't want to code in my byte code either (even if the byte codes were replaced with keywords instead of the binary code). Polish prefix notation is great for the compiler but not very good for people.


I haven't worked on a program that hijacked the core language functions in the Lisp I use (Clojure). So your concern seems odd to me, especially since you admit you can do the same thing in C. That said, I only use it in my spare time and not for work, so my exposure is limited.

Beyond that, your objections seem to be based upon familiarity. I had similar ones before I started using Clojure more often. I think the notation is just fine - it's not much different from imperative languages except they place the parenthesis in a different spot - I think it's more the nesting than the notation, which isn't a common tactic in imperative languages.

I still use imperative languages during my day job so I can decipher imperative code easier, but I am much better at deciphering functional code than I used to be.


I get an error when trying to redefine 'if' in SBCL, though you could probably do it through the abuse of packages. The C preprocessor does allow 'if' to be redefined, though. No one does that sort of thing outside of obfuscated C competitions, though, so it isn't really a problem in either language.

What you find easy to read depends on what you're used to reading.


> After I wrote that statement I started thinking

You will have a better time on HN if you swap these two more often than not.


Haskell and Elm use whitespace as well, but I don't recall if they enforce the indentation.


Fortran (at least the last version I used, which was F77) requires specific indenting.



