I know you're being funny but it's part of the community culture to always link to prior discussions cause they might spark continuing discussion and extended context to the article.
I don't think Lisp is ugly or anything, but part of me wonders if, in languages with syntax, the lack of expressivity is a feature rather than a bug, partially because wrong things look wrong, if the syntax is intelligently designed. The setf example he gave seemed perfect in that respect: while it's superficially pretty neat, the maintainer in me started to think about what it would be like to debug that, and all the horrible ways it could act slightly differently than you'd expect. I kind of like that assignment in the languages I use is really boring for the most part. I don't think that setf example is a footgun per se, but I would have to look at it more closely than in another language like Python.
The best thing about a highly expressive language is that you can do anything you want to.
The worst thing about a highly expressive language is that you can do anything you want to.
Expressive power vs. intelligibility (for the sake of common convention and communication in a team setting) is a real tradeoff, and the "principle of least power" does partially explain the popularity of slightly less-powerful languages like Java, Python, and Go, where there is generally just one idiomatic pattern for accomplishing a given task.
That is cute, but Java, Python and Go all also have multiple, huge, commercial sponsors, and I think it's a mistake to dismiss this impact as less significant than the balance between expressive power and intelligibility.
The real problem is that even with Lisp's local maximum for expressive power, it is still just a local maximum: Trying to bolt array programming or type systems onto Lisp ends up with something that doesn't feel much like lisp anymore and certainly doesn't feel like APL or an ML. Is there another maximum? I think this is still worth pursuing, but it doesn't seem to be something that a company even as big as Google or Facebook or Oracle can put their back into.
What's important to Facebook and Oracle and Google? Popularity. And "intelligibility" as a social construct definitely seems to be related to popularity, but I've never been convinced to believe you must sacrifice intelligibility to get expressive power, or that we can't make programmers any smarter, only that we haven't figured out how to do either yet.
I have done some Real Work™ in Lisp, but dear deity, Forth... that to me really is "you can do anything you want to", in such a scary way that I have never worked up the courage to actually learn it.
If you go full Forth you start with writing your own interpreter, so what you are scared of is really your own shadow.
And that's one of the qualities of Forth: to stop worrying, stop being scared of yourself, stop being scared of the future... Or more precisely, it teaches you to fear the right thing: namely the Great Evil of Accidental Complexity, which often disguises itself as a necessary evil.
Lisp languages give every programmer the power to be a language designer.
Unfortunately most people are not meant to be. Most devs are not even very good at coding, which is why we have PR reviews, style guidelines, design recommendations, linters, etc.
Humans are very flawed creatures.
Now add to that:
- if a feature exists, it will be used.
- at work, you don't have unlimited resources, you have constraints, and you will take shortcuts.
And now you have a good picture of why, yes, lack of expressiveness is something that you might want.
> the maintainer in me started to think about what it would be like to debug that
That's a valid question. Lisp macros open up a new dimension of programming, and now we have the problem of debugging/maintaining code with macros. The SETF machinery in one way tries to simplify code (the code then has one style of assignment, not different ways, and one does not need to know a special accessor: just wrap a form which looks like it GETs the value and Lisp will figure out how to SET the value). OTOH the SETF machinery is non-trivial. It's usually quite usable, but some of the details are complex, especially since it is user-extensible.
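A minimal sketch of the "wrap the form that GETs the value" idea, assuming a Common Lisp implementation (the names *h*, *l*, and the HALF accessor are made up for illustration):

```lisp
;; SETF accepts any "place" that looks like a read:
(defvar *h* (make-hash-table))
(setf (gethash :x *h*) 1)     ; set a hash-table entry
(defvar *l* (list 1 2 3))
(setf (first *l*) 10)         ; set a list element

;; It is also user-extensible; HALF is a made-up accessor whose
;; setter stores twice the given value in the car of a cons:
(defun (setf half) (new-value cell)
  (setf (car cell) (* 2 new-value)))
(setf (half *l*) 5)           ; stores 10 in (car *l*)
```

The user-extensibility is exactly the part that is usable but non-trivial: `(setf (half *l*) 5)` reads like ordinary assignment, yet what it does depends entirely on a definition that may live far from the call site.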
It's definitely something to look out for: programmers in a Lisp project might need some extra education when it comes to macro programming, to make sure that they have a good understanding of the basic mechanisms. Also, it always makes sense in a team to have more than one person look at a macro implementation.
The core ideas are timeless really, but the examples might be a bit more relatable today.
I'm reproducing the ToC here:
1. Does syntax matter?
2. The ingredients of Clojure's syntax
2.1 Data literals
2.2 Macros
3. Consequences
3.1 Verbosity is a solved problem
3.2 Separation of concerns: code layout ⊥ program structure
3.2.1 Example: the Builder Pattern
3.3 Code = Data = Data Viz
3.4 Tooling as libraries
3.5 An 'all-tracks' language: embedding paradigms
3.5.1 Example: Web UIs
3.6 Saner language stewardship
4. Summary
Note that 0x3D is the "=" character in ASCII, so "=3D" in QP is "=" in ASCII. :)
This email has probably been through a few conversions to QP and back again between different email clients. Perhaps some buggy client got confused between an ASCII "=" and a QP escape sequence or something like that.
Basically: it's a form of escaping, and like other forms of escaping the escape character itself has to be represented as an escape sequence.
So in other languages where \ is the escape character, you have to put \\ to represent an actual backslash rather than the start of an escape sequence. Here = is the escape character followed by the hex of the character, so to represent a literal = it needs its own hex value 3D after it.
Being able to write new control structures is something I miss in most languages. Lisp macros make it much easier to do that. Tcl uplevel and upvar are another way to approach it.
A good example would be a case switch that runs the first block whose condition evaluates to true (Lisp has this as COND; many languages don't). That being said, custom control structures can abstract away some things.
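As a sketch of what writing your own control structure looks like in Common Lisp: WHILE is not part of the standard (CL gives you LOOP and DO instead), but a hypothetical WHILE macro takes only a couple of lines:

```lisp
;; A made-up WHILE loop, defined as a thin macro over DO.
(defmacro while (test &body body)
  `(do () ((not ,test)) ,@body))

;; Used like a built-in control structure:
(let ((i 0))
  (while (< i 3)
    (print i)         ; prints 0, 1, 2
    (incf i)))
```

Once defined, it is indistinguishable at the call site from a construct the language shipped with, which is both the appeal and, per the debugging concerns above, the hazard.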
Delimited gotos. Many languages lack proper looping constructs. CL has a package with a macro that gives you a limited version of delimited continuations, which are handy for some situations.
I think they meant squaring the fraction of good macros: 0.1^2 = 0.01, so 99% are crap. As someone who likes and uses cpp quite a bit, I think, if that isn't true anymore, it's only because most of the people who would have written crap macros have moved to writing python or javascript or whatever instead of C or C++.
What curious newbies might miss from discussions of macros is that they are a separate language operating at compile time. They are usually advertised as code operating on itself or something like that. They're not. Powerful, yes, but not that cool.
Your comment is true about most languages, but false about Lisp.
Lisp has no compile time vs run time distinction. It only has lists. What is done with a list depends on what the first thing in the list is determined to be:
1. Function - evaluate the arguments then pass it to the function.
2. Macro - manipulate the list according to the macro and then get another list.
3. Special Form - follow the special rules for the special form. (The list of special forms varies from Lisp to Lisp, and are the elementary building blocks from which the language was built.)
In Lisp, macros are advertised as code operating on itself, because that is exactly what they are. And yes, they are exactly that cool.
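Case 2 can be made concrete with MACROEXPAND-1: a macro call is just a list that gets rewritten into another list before anything runs. (The exact expansion of WHEN is implementation-dependent; the shape below is roughly what SBCL produces.)

```lisp
;; Rewrite one macro call into the list it expands to:
(macroexpand-1 '(when t (print "hi")))
;; => something like (IF T (PROGN (PRINT "hi")))
```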
A cool example of this was NASA's Deep Space 1 probe. The control system was written in Lisp. There was a race condition in the code that wasn't detected during ground testing. NASA software engineers were able to remotely debug and patch the code while it was still running.
Macros are fully expanded before the rest of the code is executed. Trying to argue that Lisp doesn't have a distinction between compile time and run time because you usually have the compiler (or at least interpreter) available at runtime strikes me as just playing semantics games.
Do you have a citation for this? I was under the impression it was entirely possible for lisp macros to be expanded at runtime. For example, you could have data that is passed to a function and then used as code (because code and data are the same thing). The system doesn't know it's going to be used as code until it is, so the macros can't be expanded until then.
If Lisp uses an interpreter (which runs Lisp source code), you may see macro expansions at runtime.
For compiled code, there usually will be no macro expansion at runtime. Lisp will not macro-expand compiled code at runtime, since all macros need to be expandable at compile time.
But: if you explicitly want it, you can generate code at runtime and then you need to call EVAL or COMPILE. Then macro expansions might happen.
I think this is why it's non-trivial to build a distributable exe of the program, right? The exe would need a lisp interpreter/compiler bundled with it, because at runtime it might generate some code and run it!
"Most Common Lisp implementations support creating native-format executable files.
The typical route involves loading all of the program's code into the Lisp environment, dumping the current state of the Lisp process into a disk image, and identifying a start function (ie main).
Since the disk image contains the entire Common Lisp system (compiler, debugger, documentation strings, etc.), the executable will be typically larger than a compiled C program (even if you statically link the C code).
Another form of disk image requires a copy of the particular Lisp implementation to be installed on the recipient's machine. This is no different in principle from a C application requiring platform-specific dynamic libraries to run. This complicates deployment - the disk image is tied to a particular implementation version, and you typically end up having to include the Lisp implementation executable anyway.
Due to platform limitations, some Lisp implementations can't produce native executables (for example, ABCL can only make JAR files). Other implementations are only able to dump disk images that require a separate copy of the implementation executable. An alternative to executables is available on Unix platforms in the form of shell scripting."
If it needs 4 wordy paragraphs and you still don't know how to do it, I am going to chalk that up as non-trivial.
Sansext is a custom function that strips the file extension from a filename, but you don't need that; you can read the output filename from $2, or whatever else.
Nice. I think it is important for proponents of languages to answer FAQs like this so you aren't left in a corner when you want to do something (i.e. build & distribute) that is standard in most languages.
From http://www.paulgraham.com/icad.html: There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.
There is a distinction. I can compile code while reading, but when I'm compiling, then I'm in compile-time.
In a compiled implementation, code needs to be compiled before it gets run. When I want it to be compiled, I have to call functions like COMPILE or COMPILE-FILE.
There is a conceptual distinction between read time, compile time, and run time, but it's only conceptual: all these things happen in a single process in an interleaved manner. When you type an expression into the REPL, first it gets read, then it gets compiled to some extent that depends on the implementation, then it gets run. Some implementations run every expression you type through the full native-code compiler, but others just interpret the expression, expanding any macros as they encounter them. In the latter case, it's switching back and forth between "run time" and "compile time" potentially many times during the interpretation of a single expression.
It is certainly true that macros are a different kind of code from most ordinary code, because what they're doing is different. But importantly, they are written in the same Turing-complete language. I once wrote a program (a C implementation for Lisp Machines) that allowed the user to interactively execute C expressions. Obviously, it had to parse the C code according to the syntax of that language, but the parser produced list structure that consisted entirely of macro calls. The guts of the compiler were implemented as a collection of macros that translated the output of the parser into Lisp Machine Lisp. These macros did things like accessing and updating a symbol table — far beyond what you could do in a C macro.
I can compile a file to machine code and load it into another process. The file compiler provides the 'compile-time'. The other process then provides 'load-time' and 'run-time'.
If anyone's still reading, I can't resist adding that I was doing type checking by macro expansion — a technique that was the subject of a 2017 POPL paper [0] — in 1983.
There are also systems like Forths, where the compiler and interpreter really are the same system, in two different modes of operation. In these systems, there is only runtime; compiletime is a specific style of runtime.
In Forth, the interpreter reads tokens. By default, each token is read and then executed. There is a token that switches the interpreter into "compiling" mode. When in compiling mode, each token is read and then used to instruct the compiler to emit code representing the token. There is a traditional and beautiful interplay [0] between the Forth interpreter and its initial stream of tokens, as the tokens customize the interpreter and compiler by augmenting their behaviors.
The main contrast that I would draw between Forths and Lisps is syntax. Forths don't really have syntax; they have token-parsing streams. Lisps are extremely tree-oriented, but Forths are stack-oriented.
Forth has two singleton stacks, and each has a DSL. Sometimes I wonder what Forth would be like with stacks as a first-class type. Maybe actors each with their own stack …
This is one of the big advantages of Lisp macros over C macros. In Lisp you write macros using Lisp itself, including normal user-defined functions. In C, on the other hand, you write macros using C preprocessor directives.
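A small sketch of that difference (MAKE-GETTER-NAME and DEFINE-GETTER are made-up names): the macro body below calls an ordinary user-defined function at expansion time, something a C preprocessor macro cannot do.

```lisp
;; A plain function, callable by the macro at expansion time:
(defun make-getter-name (slot)
  (intern (format nil "GET-~A" slot)))

;; The macro body is ordinary Lisp calling that function:
(defmacro define-getter (slot)
  `(defun ,(make-getter-name slot) (plist)
     (getf plist ,(intern (string slot) :keyword))))

(define-getter name)            ; defines GET-NAME
(get-name '(:name "Ada"))       ; => "Ada"
```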
Yes, of course, I get that it has the same syntax, which is not the case with C macros and which makes macros way better. Thing is, there is one Lisp that executed at compile time which expands the macros, and then there is the resulting code which executed at runtime. It's not self-modifying code. It's just code that operates on some other code, written in the same syntax.
When running in the REPL (the most common way to interface with Lisp) the compile time and runtime Lisps are the same.
For example, here's a copy/paste from a REPL session that defines a macro that defines a function. The macro (at "compile time") prints information about the function (just its argument count) to stdout before using defun (itself a macro) to actually define the function.
Next I call the new function and print out a disassembly, just to show the function is in fact compiled.
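A session of that kind might look something like this (DEFUN-NOISY and ADD2 are made-up names, not the original paste):

```lisp
;; A macro that prints its argument count at macro-expansion time,
;; then expands into an ordinary DEFUN.
(defmacro defun-noisy (name args &body body)
  (format t "~&Defining ~A with ~D argument(s)~%" name (length args))
  `(defun ,name ,args ,@body))

(defun-noisy add2 (x) (+ x 2))  ; the FORMAT runs during expansion
(add2 40)                       ; => 42
;; (disassemble #'add2) would then show native code in SBCL
```

The FORMAT call runs in the same Lisp image that will later run ADD2, which is the sense in which the compile-time and runtime Lisps are the same at the REPL.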
Whatever artificial division you believe exists between "compile time" and "run time" in Lisp is almost certainly your misunderstanding, and does not reflect how the language actually works.
Perhaps I could use some clarity on this matter too, and hopefully with a bit of gentleness. As I understand it, some Lisps do have an explicit compile phase and there is discussion among the language users over the runtime cost of macro abstractions.
I understand there may be some philosophical or theoretical interest in framing Lisp the way you do, but does that invalidate the close-to-the-app thinking about the costs of macros and where those costs are distributed?
I'm not so familiar with Scheme, but I can perhaps give a useful answer regarding how this works with SBCL (probably the most well-known Common Lisp implementation).
In SBCL, we would commonly say that code is "evaluated" rather than compiled or interpreted. This is because "compiler" is referring specifically to when assembly is emitted and "interpreter" is referring to when code is being used to call already compiled code. In the case of SBCL, it is doing both of these things simultaneously when it is reading a ".lisp" file. For example, if this is the contents of my .lisp file:
(defun boop () (print "hello"))
(boop) ;; prints hello
(defun scoop () (print "goodbye"))
The boop function is being evaluated (and therefore compiled), and then called on the next line. In traditional Algol-derived languages like C and Java, with a separate compilation phase, it would not be possible to call boop before scoop has been turned into machine code.
So you are asking how the costs of evaluating lisp are distributed?
For one, the assembly generated whilst evaluating the .lisp file can be cached in a .fasl file, so that the source does not need to be evaluated again to be loaded into other projects.
Another is that macros can be evaluated once during function definition, e.g.
(foo) ; will print "kellog" because funky was evaluated before moob was redefined.
In this case, the cost of calling moob is trivial because it only occurs once during the definition of foo. It will slow down the initial load of your program, but no subsequent calls to foo.
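The same point, sketched with made-up names (EXPENSIVE-SUM stands in for the funky/moob example): the work in a macro body runs once, while the enclosing function is being defined, not on every call.

```lisp
;; EXPENSIVE-SUM does its computation at macro-expansion time,
;; i.e. while FOO is being compiled.
(defmacro expensive-sum ()
  (reduce #'+ (loop for i below 1000 collect i)))

(defun foo () (expensive-sum))
;; FOO's body is now just the resulting constant; calling it
;; performs no summing.
(foo)   ; => 499500
```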
I wouldn't say they are a separate language, but they definitely feel like a separate layer. They're not first-class objects like functions (so you can't pass them to functions) and you can't evaluate the arguments (since macros are only directly present during compilation/interpretation); it's purely a mapping from source code to source code.
For example, you can't rewrite a list based on its values or form. You can rewrite the list to a program that does the transformation when run, but the separation feels very tangible to me. Something like FEXPRs would be closer to being "just the same as normal code", but there's good reason why macros are the way they are.
I don't know what your threshold for "cool" is, but an example of something that's straightforward in Lisp is if you want to generate a hierarchy of class declarations from a schema, so the classes have certain instance variables and methods defined in some regular way; for instance, you might want to auto-generate a method for each class that returns a collection of the values of certain of its instance variables. In most any language other than Lisp, you'd have to define your own schema syntax and write a parser for it and a generator that outputs the declarations in the target language. In a nutshell, it would probably be painful enough that you wouldn't bother; you'd just type in all the boilerplate manually. In Lisp, not only is writing such a schema translator relatively straightforward, but you get incremental recompilation for free.
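A toy sketch of that idea (DEFSCHEMA and SUMMARY are made-up names; a real schema translator would generate far more), showing a single compact form expanding into a class definition plus a method that collects slot values:

```lisp
;; Generate a class and a SUMMARY method returning its slot
;; values, all from one schema form.
(defmacro defschema (name &rest slots)
  `(progn
     (defclass ,name ()
       ,(loop for s in slots
              collect `(,s :initarg ,(intern (string s) :keyword)
                           :accessor ,s)))
     (defmethod summary ((obj ,name))
       (list ,@(loop for s in slots collect `(,s obj))))))

(defschema point x y)
(summary (make-instance 'point :x 1 :y 2))   ; => (1 2)
```

Because DEFSCHEMA expands into ordinary DEFCLASS/DEFMETHOD forms, redefining a schema and recompiling just that form updates the class incrementally, which is the "incremental recompilation for free" part.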
2014: https://news.ycombinator.com/item?id=8698074
2012: https://news.ycombinator.com/item?id=4246143
2009: https://news.ycombinator.com/item?id=795344