Hofstadter on Lisp (1983) (gist.github.com)
373 points by Eric_WVGG 71 days ago | 272 comments



In case anyone else is confused by what the functions named "oval" and "snot" mean in the following example:

  > (cond ((eq (oval pi) pie) (oval (snot pie pi)))
  (t (eval (snoc (rac pi) pi))))
I realised after a few seconds that they are meant to be "eval" and "snoc". The code should instead be written as follows:

  (cond ((eq (eval pi) pie)
         (eval (snoc pie pi)))
        (t (eval (snoc (rac pi) pi))))
This article has been a fascinating read, by the way. Kudos to the maintainer of the Gist post. I am also sharing these corrections as comments on the Gist post.

EDIT #1: Downloaded a copy of the original Scientific American article from https://www.jstor.org/stable/24968822 and confirmed that indeed the functions "oval" and "snot" are misspellings of "eval" and "snoc".

EDIT #2: Fixed typo in this comment highlighted by @fuzztester below.


>confirmed that indeed the functions "oval" and "snot" are misspellings of "eval" and "snot".

Correction of your correction:

confirmed that indeed the functions "oval" and "snot" are misspellings of "eval" and "snoc".

And I guess snoc is cons reversed and rac is car reversed.


> Correction of your correction

Thanks! Fixed.

> And I guess snoc is cons reversed and rac is car reversed.

Indeed! That's exactly how those functions are introduced in the article. Quoting from the article:

> The functions rdc and snoc are analogous to cdr and cons, only backwards.
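
For anyone playing along in a REPL, here are minimal Common Lisp definitions (rac's definition appears verbatim later in the article; rdc and snoc are my reading of the obvious intent):

  (defun rac (lyst) (car (reverse lyst)))            ; last element
  (defun rdc (lyst) (reverse (cdr (reverse lyst))))  ; all but the last
  (defun snoc (x lyst) (append lyst (list x)))       ; attach x at the end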



OCR maybe?


maybe LLM "reformat/rewrite this"?


This article simply reinforces that the primary problem with the popularity of Lisp was people explaining Lisp.

This article, like every other Lisp article, tells pre-teen me nothing that he could use. Nobody ever demonstrated how much easier task X is in Lisp over asm/C/Pascal/etc.

By contrast, current me could have told pre-teen me "Hey, that spell checker that took you 7 months to write in assembly? Yeah, it's damn near trivial in Lisp on a microcomputer with bank switched memory that nobody ever knew how to utilize (it makes garbage collection completely deterministic even on a woefully underpowered CPU). Watch."

I want to weep over the time I wasted doing programming with the equivalent of tweezers, rice grains and glue because every Lisp article and textbook repeated the same worn out lists, recursion and AI crap without ever demonstrating how to do anything useful.


Common Lisp: A Gentle Introduction to Symbolic Computation might be useful for the context you are describing. https://www.cs.cmu.edu/~dst/LispBook/


Practical Common Lisp https://gigamonkeys.com/book/


Didn't exist back then. Likewise SICP first edition was 1996.

I did have a copy of "LISP: A Gentle Introduction to Symbolic Computation" by Touretzky in 1986. It wasn't really that much better than any of the articles. It never explained why using Lisp would be so much easier than anything else even for simple programming tasks.

Had some of the Lisp hackers deigned to do stuff on the piddly little micros and write it up, things would look a whole lot different today.

Maybe there was a magazine somewhere doing cool stuff with Lisp on micros in the 1980-1988 time frame, but I never found it.


Generally I agree with what you are saying. I live outside the US, so this stuff from the late 70s / early 80s was very remote. Fortunately we had a well-connected university where I got in contact with some of the more interesting stuff in the mid 80s.

The book I found most useful in the early days as an introduction to Lisp and programming with it was LISP by Winston & Horn. The first edition was from 1981 and the second edition from 1984. I especially liked the third edition.

https://en.wikipedia.org/wiki/Lisp_(book)

Lisp on microcomputers in the early 80s was mostly not useful - that was my impression. I saw a Lisp for the Apple II, but that was very barebones. Next was Cambridge Lisp (a version of Standard Lisp) on the Atari ST. That was more complete, but programming with it was a pain. Still, I found the idea of a very dynamic & expressive programming language and its interactive development style very interesting. The first useful implementations on smaller computers I saw were MacScheme and Coral Lisp, both for the Macintosh, mid 80s...

There were articles about Lisp in Byte magazine early on, but getting access to the software mentioned was difficult.

The early use cases one heard of were: computer science education, functional programming, generally experimenting with new ideas of writing software, natural language processing, symbolic mathematics, ... This was nothing that would be attractive to a wider audience. David Betz's XLisp later made Lisp more accessible; it was then used in AutoCAD as an extension language: AutoLisp.

Luckily, starting in the mid 80s, I had access at the university to the incoming stream of new research reports, and there were reports about various Lisp related projects, theses, etc.


The first edition of SICP came out in the fall of 1984 (a year after these Hofstadter columns). This fall is the 40th anniversary!


I stand corrected on that. Thanks.


Hofstadter's followup article had a more interesting example.

When I first got a Byte magazine as a pre-teen, one of the articles was Lisp code for symbolic differentiation and algebraic simplification. I really couldn't follow it but felt there was something intriguing there. Certainly it wouldn't have been easier in Basic.

(Byte September 1981, AI theme issue. Later I was able to tell the code was not so hot...)
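
For flavor, the heart of such a differentiator is tiny. Here is a Common Lisp sketch of the classic shape (my reconstruction, not the magazine's actual code; it handles only binary + and *):

  (defun deriv (expr var)
    (cond ((numberp expr) 0)
          ((eq expr var) 1)
          ((symbolp expr) 0)
          ((eq (car expr) '+)                ; d(u+v) = du + dv
           (list '+ (deriv (cadr expr) var) (deriv (caddr expr) var)))
          ((eq (car expr) '*)                ; d(u*v) = u*dv + du*v
           (list '+
                 (list '* (cadr expr) (deriv (caddr expr) var))
                 (list '* (deriv (cadr expr) var) (caddr expr))))))

  ;; (deriv '(* x x) 'x) => (+ (* X 1) (* 1 X))

Algebraic simplification is the harder half - turning that result into (* 2 X) - but the core recursion fits on a page.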

I didn't really get into Lisp until the late 80s with XLisp on a PC, and SICP. Worth the wait!


I just love his writing so much -- he captures what I felt when I discovered Lisp. As a kid learning programming in the 80s, I had already done some BASIC, Fortran, Pascal and COBOL in high school and early college. There were differences, of course, but they had some fundamental commonality.

At UC Berkeley, however, the first computer science class was taught in Scheme (a dialect of Lisp)...and it absolutely blew me away. Hofstadter is right: it feels the closest to math (reminding me a ton of my math theory classes). It was the first beautiful language I discovered.

(edit: I forgot to paste in the quote I loved!)

"...Lisp and Algol, are built around a kernel that seems as natural as a branch of mathematics. The kernel of Lisp has a crystalline purity that not only appeals to the esthetic sense, but also makes Lisp a far more flexible language than most others."


Have you tried Haskell? It feels much closer to math to me. Definitions, not procedures. It even looks like math.


Maybe Haskell is more like Bourbaki math, whereas Lisp is more like Russian-style maths (à la Vladimir Arnold). I prefer the latter tbh, and I come to programming from a maths background. We are all different. Lisp to me is yet to be surpassed in terms of ergonomics when transferring my thoughts into computer code.


Interesting. How would you characterise each (Bourbaki and Russian-style)?


I'm not sure what you mean by characterize. Bourbaki-style is extremely rigorous, to the point of missing the forest for the trees. The so-called Russian style (there are plenty of non-Russian examples) is more driven toward building intuition and getting to the essence of the matter. In this way Lisp is more similar to the latter because it facilitates prototype (essence) development. In Haskell you pretty much have to do a captcha equivalent of programming just to prove to the compiler you are allowed to do IO :)


That is a very interesting perspective on Haskell vs. Lisp. I don't come to programming from a math background, but I am Russian. Maybe that's why I always preferred Lisp-style over Haskell :)


Maybe. I also see Lisp as the Slavic language of programming :)


Most obviously (from the linguistic point of view), Lisp is the Latin of programming languages. It has a similar historical importance; a similar foundational role; continued relevance, elegance and power; and it evokes similar reactions from neophytes.

From the point of Biology: Lisp is a prokaryotic cell - simple, fundamental, highly adaptable.

In Chemistry: Lisp is carbon - versatile, forms the basis of complex structures.

In Geology: Lisp is like bedrock - foundational and supporting diverse structures above it.

In Astronomy: Lisp is a primordial star - ancient, influential, contributing to the formation of newer elements.

In Physics: Lisp is a quark - the basis of all baryonic matter.

</nerd-rant>


The difference is that it's not the case that a majority of chemists are ignorant about carbon, or geologists about bedrock, or astronomers about primordial stars or physicists about quarks.

Lisp is like an entire branch of computer science, about which a lot of people in computer science are ignorant.


Allow me to gently disagree. Most computer scientists I know are not ignorant about Lisp. Some scholars consider computer science a branch of mathematics, while others avoid such broad generalizations, as modern computer science has evolved into a broader discipline.

It's just that the majority of modern programmers are not concerned with mathematics, and that's perfectly acceptable. Mathematics itself has so many different levels that even mathematicians themselves are not always certain if they are indeed practicing mathematics.

You may be conflating programmers and computer scientists, but this could also be a perfect case of selection bias, where both of us are simultaneously correct and incorrect in our assertions.


I would say in physics, Lisp is like the formulation of physical laws in terms of Lagrangians and Hamiltonians - i.e., it is the least action principle.


Here's a hard one. In political science Lisp would be like...?


Juche ?


I mean what it usually means: to list distinguishing features, or at least give (necessary and sufficient?) criteria for membership of some class.

Whilst I'm vaguely familiar with Bourbaki and how it strongly influenced the way mathematics is written today, I hadn't come across that dichotomy before. Your answer was what I was looking for!


No! After about 10 years of writing software professionally, I moved over to product management, and my time spent coding decreased drastically (in the last 15 years, only some Python to show my kids a thing or two).

But I'd love to try! Maybe I'll take an online class for fun.


I can't recommend it highly enough. You're already familiar with laziness from Lisp, but purity is another head-trip. It made me a better programmer in any language, and even a better software architect before I've written a line of code.

And algebraic data types make it possible to make your code conform to reality in ways that classes can't. Once you're exposed to them, it's very much like learning about addition after having been able to multiply for your whole life. (In fact that's more than a metaphor -- it's what's happening, in a category theoretic sense.)

Haskell has other cool stuff too -- lenses, effect systems, recursion schemes, searching for functions based on their type signatures, really it's a very long list -- but I think laziness, purity and ADTs are the ones that really changed my brain for the better.


Have you tried Coalton? It's a Common Lisp library that adds Haskell-esque (or near-Haskell) type wonders, and which smoothly interoperates with your Common Lisp code.

Your comment is great though, consider me convinced. I've done a bit of messing with Lisp, but I really would like to try writing something in Haskell, or slog through a book or two, some day.


Damn that was a really good pitch. I think I’m too dumb to learn Haskell though lol. I’m struggling enough with immutability in clojure!!


As someone with some experience in Haskell (although not an expert by any means): Haskell and some of its concepts are foreign to many people, but I think that it is actually easier to program in Haskell than in many other languages I know. At least for my ADHD brain ;)

This impression can be changed somewhat by the fact that Haskell and its community have two faces: there is the friendly, "stuff-just-works" and "oh-nice-look-at-these-easy-to-understand-and-useful-abstractions" pragmatic Haskell that uses the vanilla language without many extensions, written by people who solve some real-world problem by programming.

Then there is the hardcore academic crowd - in my experience, very friendly, but heavily into mathematics, types and programming language theory. They make use of the fact that Haskell is also a research language with many extensions that are someone's PhD thesis. Which might also be the only documentation for that particular extension if you are unlucky. However, you can always ask - the community errs on the side of oversharing information rather than the opposite.

Rust fills that gaping hole in my heart that Haskell opened a bit - not completely, but when it comes to $dayjob type of work, it feels somewhat similar (fight the compiler, but "when it compiles, it runs").


> I think I’m too dumb to learn Haskell though lol.

I felt the same way, a lot of people feel that way.

This is in part because FP is difficult, typed FP is difficult, and Haskell is difficult. All by themselves. They do get easier once you intuit more and more FP in general I'd say.

Then there's also a phenomenon described in the Haskell Pyramid[0] where it sometimes appears more difficult than it really is.

Like a lot of things, actually building something gets you a long way, esp. with the advent of chat AIs, as it's comparatively easy to go back and forth and learn little by little.

[0] https://patrickmn.com/software/the-haskell-pyramid/


Haskell makes it easier, because immutability comes more naturally there.


I should have listed immutability as another thing that changed my brain for the better.


Do you ever code just for fun?


Personal anecdote: I got a lot more out of lisp that stuck with me than Haskell. Occasionally I say "oh this is a monad" or think about a type signature, but that's about it.


At the risk of diverging from the original post, I also think that calling it "math" might make things a bit murky (and this is coming from someone who wanted to be an algebraic topologist!)

It _is_ an elegant and minimal expression of a style of programming that is ubiquitous among dynamically-typed, garbage-collected languages. And it's a "theory" in the sense that it seems complete, and that you can work out ways to solve problems in Scheme and translate them into other dynamically-typed languages and still end up with an elegant solution. Emphasis on the elegant (since minimal, wart-free, consistent and orthogonal, etc.).

Scheme was a simplification and a "cleaning up" compared to conventional Lisps of the time (lexical scoping, single shared namespace for functions and variables etc.)


40 years ago, and 20 years into the field:

> February, 1983

> IN previous columns I have written quite often about the field of artificial intelligence - the search for ways to program computers so that they might come to behave with flexibility, common sense, insight, creativity, self awareness, humor, and so on.

This is very amusing to me because it reads like a list of things LLMs truly stink at. Though at least they finally represent some nonzero amount of movement in that direction.


You must interact with more interesting people than I because to me LLMs have demonstrated as much "common sense, insight, creativity, self awareness, humor" as the average person I run into (actually maybe more but that makes me sound crazy to myself).


Eliza effect is alive and well, also.


Way back someone observed that the problem was not computers thinking like people, but people thinking like computers.

I believe we've been getting a bit of the latter.


His research group has a long history of trying to tackle these problems. Some interesting reading, even if much of it hasn’t (yet?) panned out.


This article, and the two companion articles it mentions, can be found in the book "Metamagical Themas" [0] in chapters 17-19, as well as all of his other articles that appeared in this series of Scientific American.

[0]: https://www.goodreads.com/book/show/181239.Metamagical_Thema...

(the book's title is the name of the article series, which originated as an anagram of "Mathematical Games," the article series that Martin Gardner authored, also published in Scientific American, and which Hofstadter then took over)


I love this book. Highly recommended for all lovers of Gödel, Escher, Bach (his classic).


This book is indeed very beautiful to read and look at, it has a lot of fascinating illustrations.


> Attempting to take the car or cdr of nil causes (or should cause) the Lisp genie to cough out an error message, just as attempting to divide by zero should evoke an error message.

Interestingly, this is no longer the case. Modern Lisps now evaluate (car nil) and (cdr nil) to nil. In the original Lisp defined by John McCarthy, indeed CAR and CDR were undefined for NIL. Quoting from <https://dl.acm.org/doi/pdf/10.1145/367177.367199>:

> Here NIL is an atomic symbol used to terminate lists.

> car [x] is defined if and only if x is not atomic.

> cdr [x] is also defined when x is not atomic.

However, both Common Lisp and Emacs Lisp define (car nil) and (cdr nil) to be nil. Quoting from <https://www.lispworks.com/documentation/HyperSpec/Body/f_car...>:

> If x is a cons, car returns the car of that cons. If x is nil, car returns nil.

> If x is a cons, cdr returns the cdr of that cons. If x is nil, cdr returns nil.

Also, quoting from <https://www.gnu.org/software/emacs/manual/html_node/elisp/Li...>:

> Function: car cons-cell ... As a special case, if cons-cell is nil, this function returns nil. Therefore, any list is a valid argument. An error is signaled if the argument is not a cons cell or nil.

> Function: cdr cons-cell ... As a special case, if cons-cell is nil, this function returns nil; therefore, any list is a valid argument. An error is signaled if the argument is not a cons cell or nil.


I was curious what it is like on Maclisp. Here is a complete telnet session with Lars Brinkhoff's public ITS:

  $ telnet its.pdp10.se 10003
  Trying 88.99.191.74...
  Connected to pdp10.se.
  Escape character is '^]'.


  Connected to the KA-10 simulator MTY device, line 0

  ^Z
  TT ITS.1652. DDT.1548.
  TTY 21
  3. Lusers, Fair Share = 99%
  Welcome to ITS!

  For brief information, type ?
  For a list of colon commands, type :? and press Enter.
  For the full info system, type :INFO and Enter.

  Happy hacking!
  :LOGIN SUSAM
  TT: SUSAM; SUSAM MAIL - NON-EXISTENT DIRECTORY
  :LISP

  LISP 2156
  Alloc? n


  *
  (status lispversion)
  /2156
  (car nil)
  NIL
  (cdr nil)
  NIL
  ^Z
  50107)   XCT 11   :LOGOUT

  TT ITS 1652  Console 21 Free. 19:55:07
  ^]
  telnet> ^D Connection closed.
  $


I recall reading that in early versions of Maclisp, taking the CAR or CDR of NIL worked differently: Taking its CAR would signal an error as you would expect, however taking its CDR would return the symbol plist of NIL, as internally the operation of CDR on the location of a symbol would access its plist, and that's how it was commonly done before there was a specific form for it (and it actually still worked that way into Lisp Machine Lisp, provided you took the CDR of the locative of a symbol).

Apparently the behaviour of the CAR and CDR of NIL being NIL was from Interlisp, and it wasn't until the designers of Maclisp and Interlisp met to exchange ideas that they decided to adopt that behaviour (it was also ostensibly one of the very few things they actually ended up agreeing on). The reason they chose it was because they figured operations like CADR and such would be more correct if they simply returned NIL if that part of the list didn't exist rather than returning an error, otherwise you had to check each cons of the list every time. (If somebody can find the source for this, please link it!)


But of course cadr still has to check each access, to see if it's of type (or cons null). So I don't see what was saved.


What's saved is that your code just calls that function, rather than open-coding that check.

Suppose you wrote this code in more than two or three places:

  (if (and (consp x) (consp (cdr x)))
      (car (cdr x)))
you might define a function for that. Since there is cadr, you don't have to.

Also, that function may be more efficient, especially if our compiler doesn't have good CSE. Even if x is just a local variable, there is the issue that (cdr x) is called twice. A clever compiler will recognize that the value of x has not changed, and generate only one access to the cdr.

The function can be coded to do that even in the absence of such a compiler.

(That is realistic; in the early lifecycle of a language, the quality of library functions can easily outpace the quality of compiler code generation, because the library writers use efficient coding tricks, and perhaps even drop into a lower level language where beneficial.)

If x itself is a complex expression:

  (if (and (consp (complex-expr y)) (consp (cdr (complex-expr y))))
      (car (cdr (complex-expr y))))
we will likely code that as:

  (let ((x (complex-expr y)))
    ...)
The function call gives us all that for free: (cadr (complex-expr y)). The argument expression is evaluated once, and bound to the formal parameter that the function refers to, and the function body can do manual CSE not to access the cdr twice.
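
A sketch of the hand-rolled version in Common Lisp (hypothetical name; in CL itself you never need this, since car and cdr already accept nil):

  (defun safe-cadr (x)
    "Return the second element of X, or NIL if X is too short."
    (let ((rest (and (consp x) (cdr x))))  ; access the cdr only once
      (and (consp rest) (car rest))))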


It would be considered "the right thing" to do something that's so common you probably want it without asking. I don't think CADR would check for NIL since it's meant to be equivalent to (car (cdr x)), so if you wanted a safe list operation you would have to check it like this: (I'll use CADADR because it makes the issue more apparent)

  (and (car x)
       (cadr x)
       (cadar x)
       (cadadr x))
You would have to write this every time you want to see if there's really a CADADR, whereas if CAR and CDR can return NIL then you can just write (cadadr x) and CADADR can still be defined as (car (cdr (car (cdr x)))) and have the desired behaviour.
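
Concretely (my examples, assuming nil-punning car/cdr as in Common Lisp):

  (cadadr '(a (b c)))  ; => C
  (cadadr '(a))        ; => NIL, no error
  (cadadr nil)         ; => NIL, no error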


Any argument of the form "you have to write this idiom" is covered by "so either it doesn't happen often, or one can use a macro". There's cognitive overhead to using a macro, but there's also cognitive overhead to remembering car and cdr work on nil. The latter is already paid for, so changing Common Lisp now doesn't make sense, but in an alternate world with a different design it would be a point.


There's more 'cognitive overhead' to making CADADR etc. a macro that expands to the above when CAR and CDR don't work that way, since then its implementation isn't consistent with what it's meant to be. If you made it a macro with a different name then you have two slightly different versions of CADADR and every other accessor, which is even more overhead. Apparently this idiom happened often enough that it was deemed desirable to simply make it the default behaviour. Accommodating common idioms is a pattern of "the right thing" design, from which Lisp is heavily derived, and if you're not a Lisp programmer then keeping this philosophy in mind is a good way to not have misconceptions about the design of the system.

However, thinking in terms of 'cognitive overhead' for a very minor design choice is very silly. I don't suffer any 'cognitive overhead' from having CAR and CDR work on NIL when I write Common Lisp because I'm used to it, but I do suffer 'cognitive overhead' when they don't in Scheme, which is the 'alternate world with a different design'. I am incredulous to the idea that one is actually superior to the other, and suppose that it is simply a matter of preference.


Under one of the two choices, we can reduce the amount of code, if we stick to certain easy representational conventions around how we use nil.


The use of car and cdr is such a surprisingly concrete implementation detail in the birth of a language that was designed to be mathematical. The most basic and famous operators of the "List Processor" were created to operate not on lists but on conses, an element in a particular machine representation that Lisp uses to build data structures! Not only are conses not always interpreted as lists, but a very very important list, the base case for recursive functions on lists, is not represented by a cons.

Sixty years later, most Lisp programs are still full of operations on conses. A more accurate name for the language would be "Cons Processor!" It's a reminder that Lisp was born in an era when a language and its implementation had to fit hand in glove. I think that makes the achievement of grounding a computer language in mathematical logic all the more remarkable.


maybe related to the need to conserve CPU registers in machines of the time?

https://en.m.wikipedia.org/wiki/CAR_and_CDR

In any case, astute observation

er, ASTute ;)


> Modern Lisps now evaluate (car nil) and (cdr nil) to nil.

Scheme doesn't. Taking the CAR or CDR of nil is an error.


Does Scheme even have NIL in the sense that other Lisps like CL or Elisp have? I mean in Common Lisp, we have:

  CL-USER> (symbolp nil)
  T
  CL-USER> (atom nil)
  T
  CL-USER> (listp nil)
  T
Similar results in Emacs Lisp. But in MIT Scheme, we get:

  1 ]=> nil

  ;Unbound variable: nil
Of course, we can use () or (define nil ()) to illustrate your point. For example:

  1 ]=> (car ())

  ;The object (), passed as the first argument to car, is not the correct type.
But when I said NIL earlier, I really meant the symbol NIL that evaluates to NIL and is both a LIST and ATOM. But otherwise, yes, I understand your point and agree with it.


> Does Scheme even have NIL in the sense that other Lisps like CL or Elisp have?

No. It has an empty list, which is a singleton atomic value whose type is not shared with any other object, and it has a boolean false value, which is distinct from the empty list. A user can create a symbol named NIL, but that symbol has no characteristics that distinguish it from any other symbol. You can, of course, bind NIL to either the empty list or boolean false (or any other value) but it can only have one value at a time (per thread).


> No. It has an empty list

If there is no nil in scheme, what did you mean when you said that taking car or cdr of nil is an error in scheme?


You need to read more carefully. The claim was not that there is no NIL in Scheme, the claim was that Scheme does not have a "NIL in the sense that other Lisps like CL or Elisp have". There is a NIL is Scheme, but it's just a symbol like any other with no privileged status. Also, in colloquial use, the word "nil" is often taken to be a synonym for "the empty list" even when talking about Scheme.


I don't believe so, standardly. Guile Scheme added the value `#nil', which is equivalent to NIL and distinct from #f and the empty list, but this was done in order to support Emacs Lisp.


I'm not a LISPer but this just seems more correct to me, since stricter is usually more correct.

Ruby (not a lisp but bear with me) started to do this more correctly IMHO: nil throws errors if you try to do things with it BUT it is still equivalent to false in boolean checks.


It depends on what you are trying to optimize for. There is a benefit to punning the empty list and boolean false. It lets you shorten (not (null x)) to just x, and that is a common enough idiom that it actually makes a difference in real code. And there is a benefit to being able to say or type "nil" instead of "the empty list" because "nil" is shorter. But yeah, for modern production code, I agree that stricter is better, all else being equal.


I love that I started out saying "I'm not a LISPer" and someone whose username is literally "lisper" responded. >..<

Is there a purely functional Lisp that compiles to machine code yet?


That depends on what you mean by "purely functional Lisp". You can write purely functional code in any Lisp, and you can compile any Lisp to machine language, and this has been true for decades. AFAIK there is no Lisp that enforces purely functional programming, but it's easy to build one if that's what you want.


That is what I want.

Turns out that LFE (Lisp-Flavored Erlang) exists…

Wait, doesn’t Clojure have this, to a large degree at least?


sorry for another lisp question

if from a syntactic-flavor perspective, endless parentheses turn me off, but also cleanly map to significant indentation (where any new open paren is a new indentation level and a close paren maps to a backdent), has anyone tried a Lisp that uses indentation instead of parens?

I'm probably failing to consider edge cases but it seems like a potentially simple tweak that might make lisps more palatable to many

imagine that, a lisp without parens... (empty cons literals... crap, that's 1 edge case!)


> I'm probably failing to consider edge cases but it seems like a potentially simple tweak that might make lisps more palatable to many

Lisp came out in 1960. The s-expression-only syntax was an accident or a discovery - depending on one's view. Over the many years no attempt to add significant indentation syntax without parentheses gained more than a few users. Syntax variants without parentheses (and no significant indentation) only had a marginally better fate. Sometimes it even contributed to the failure of Lisp derived languages (-> Lisp 2, Dylan)...


Alternative syntaxes for Lisp dialects, some of them indentation-sensitive, have been proposed numerous times over the entire history of the Lisp family.

From the start, John McCarthy believed that Lisp would be programmed using M-expressions and not S-expressions. M-expressions are still quite parenthetical, but have some syntactic sugar for case statements and such.

In the second incarnation of the Lisp project, which was called Lisp 2, McCarthy's team introduced an Algol-like syntactic layer transpiling to Lisp. This was still in the mid-1960s! The project didn't go anywhere; Lisp 1.5 outlived it, and is the ancestor of most other Lisp stuff.

In the early 1970s, Vaughan Pratt (of "Pratt parser" fame) came up with CGOL: another alternative programming-language syntax layer for Lisp.

Scheme has a "sweet expressions" SRFI 110 which I think was originated by David Wheeler. It is indentation-based syntax.

The Racket language has numerous language front ends, which are indicated/requested in the source file with #lang. I think one of them is sweet expressions or something like it.

Those are just some of the notable things, not counting lesser known individual projects.


What do you think is more likely, that you are the first person to ever think of this, or that others have tried to do this and failed for some reason? The more interesting question is: what is that reason? I'm not going to tell you the answer, you will learn more if you figure it out yourself, but here's a hint: look at Python's syntax, and ask yourself if it is possible to write an editor that auto-indents Python. (Second hint: look at what happens when you edit Python code in Emacs. Third hint: look at what happens when you put in PASS statements.)


I never suggested that I was the first person to think of this; not having dealt with any Lisp since (hmmm) 1990 via Scheme in my introductory CS 212 class at Cornell probably has something to do with my ignorance of the prior art in this area. I do like your approach of breadcrumbing me instead of giving me the answer, though... best I can guess is "tooling" and simply that S-expressions are simply too embedded in the minds of the Lisp community at this (or previous) point(s).

I also don't deal with significant-indentation in languages usually (and have a strong Python distaste); though I've been playing with Roc (https://www.roc-lang.org/), which has this, and have used HAML (https://haml.info/) in the past, where it seemed useful. I suppose auto-indenting is impossible in a significant-indentation language depending on what the editor can intuit based on how the previous line ended, but I don't think I'd need that feature as long as it simply held the current indentation and just let me hit Tab or Backspace. (I could see things becoming a mess if you manage to screw up the indentation, though.)

I did research "sweet expressions" (which are apparently also called T-expressions) and found the prior art there in Scheme and Lisp, and a library called "sweet" for Racket (which is another intriguing lisp dialect!). These might have gotchas, but apparently they've sufficiently solved the problem enough to be usable.

I do simply like how "T-expressions" look. Which is something I guess I care about, although I know that's not a universal among coders. (My guess is that those who care about such things are simply not 100% left-brained about their coding and are invested in the "writing" aspect of the craft.)


Elisp and CL do.


Sadly this is not the case with Scheme and it makes for very unergonomic code, especially for a newbie like me.

Which is a shame, because I prefer (Guile) Scheme to Common Lisp.


I'm very tied to Common Lisp, but I'm perfectly fine with the idea of a lisp in which car and cdr would be undefined on nil. Also, I'd be fine with a lisp in which () is not a symbol. I don't think these features of Common Lisp are essential or all that valuable.


They are not essential, but they make code that operates on lists more compact and pleasant to write.

In Scheme my code is littered with

  (if (null? lst)
      ;; handle empty case here
      ...)
Simply because otherwise car throws an error. This whole section is often unnecessary in CL.


But you need to handle the empty case anyway otherwise you process nils ad infinitum.


You can say

  (if lst
    ...)
if the empty list is falsy, but Scheme eventually chose to add #t and #f. Oddly #f is the only false value but #t is not the only true value.


I do prefer nil being the false value as well as the empty list, even if it makes it more awkward to distinguish between 'there is a result, but the result is an empty list' and 'there are no results'. But that has nothing to do with car and cdr in Common Lisp treating nil as though it were `(cons nil nil)'. The only value in that I can see is would be if rplaca and rplacd can do some useful things with that (so `(setf (car symbol-that-currently-points-at-nil) foo)' and `(setf (cdr stcpat) bar)' do those useful things).


An oldie: https://ashwinram.org/1986/01/28/a-short-ballad-dedicated-to...

Describes the evolution from:

  (cdr (assq key a-list))
to:

  (let ((val (assq key a-list)))
     (cond ((not (null? val)) (cdr val))
           (else nil)))


Now, which is more pleasant to read (arguably the more important question for all, but the most primitive of applications)?


> Sadly this is not the case with Scheme and it makes for very unergonomic code,

How so? If car of nil returns nil, then how does a caller distinguish between a value of nil and a container/list containing nil?

The only way is they can check to see if it's a cons pair or not? So if you have to check if it's a cons pair then you're doing the same thing as in scheme right?

I may be missing something, but isn't it effectively the same amount of work? Potentially you need to check for nil and also check whether it's a pair?


> how does a caller distinguish between a value of nil and a container/list containing nil

Very easily; but the point is that it's very often easy to design things so that the caller doesn't have to care.

For instance, lookup in an associative list can just be (cdr (assoc key alist)).

If the key is not found, assoc returns nil, and so cdr returns nil.

Right, so when we use this shortcut, we have an ambiguity: does the list actually have that key, but associated with the value nil? Or does it not have the key.

Believe it or not, we can design the data representation very easily such that we don't care about the difference between these two cases; we just say we don't have nil as a value; a key with a value nil is as good as a missing key.

This situation is very often acceptable. Because, in fact, data structures are very often heavily restrained in what data types they contain. Whenever we assert that, say, a dictionary has values that are, say, strings, there we have it: values may not be nil because nil is not a string. And so the ambiguity is gone.

A nice situation occurs when keys are associated with lists of values. A key may exist, but be associated with an empty list (which is nil!). Or it may not exist. We can set things up so that we don't care about distinguishing these two. If key K doesn't exist then K is not associated with a list of items, which is practically the same as being associated with an empty list of items. If we split hairs, it isn't, but in a practical application things can be arranged so it doesn't matter.
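
Spelled out with a quick example (mine, in Common Lisp):

  (cdr (assoc 'k '((k . 1))))    ; => 1
  (cdr (assoc 'k '((k . nil))))  ; => NIL  (key present, value nil)
  (cdr (assoc 'j '((k . 1))))    ; => NIL  (key absent; assoc returned nil)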


> How so? If car of nil returns nil, then how does a caller distinguish between a value of nil and a container/list containing nil?

How about this?

  CL-USER> (null nil)
  T
  CL-USER> (null '(nil))
  NIL
  CL-USER>


I think that's my point. You still need a separate call to distinguish the nil from the list-of-nil case.

At that point, if you're making the two calls how is LISP's behavior any more ergonomic than Scheme. I'm not saying it's not possible, I just don't see it.

Can you show code between the two and how one is much worse than the other?


We can declare that our code only works with lists of numbers, or lists of strings. Therefore, nil is not expected. If (car list) returns nil, it can only be that the list is empty, because if it were not empty, it would return a number, or string, or widget or whatever the list holds.

Even when we have a heterogeneous list in Lisp, like one that can have symbols, numbers, strings or widgets, we can almost always exclude nil as a matter of design, and thus cheerfully use the simpler code.

We cannot exclude nil when a list contains Boolean values, because nil is our false.

We also cannot exclude it when it contains lists, because nil is our empty list.

The beauty is that in many situations, we can arrange not to have to care about the distinction between "item is missing" and "item is false" and "item is an empty list", and then we can write terser code.

When you see such terse code from another programmer, you know instinctively what the deal is with how they are treating nil before even looking at any documentation or test cases.


cons is an adt and fundamental building block used to build lists (which is a builtin datatype) it's also used to build other data types. the property we're discussing is useful when you're operating on those other data types, rather than lists. when you're designing those other data types you have to be aware that null can be both the absence of value and a value, so you design those other data types appropriately. the property we're discussing becomes useful and handy when you don't care about that distinction, which is quite often in practice.

for example a useful datatype is an association list. (setq x '((a . 1) (b . 2) (c . nil))) you can query it by calling (assoc 'a x) which is going to give you back a cons cell (a . 1) in this case. now the presence or absence of this cell indicates the association. if you want to know explicitly that C is nil, then you have an option to, and it's similar in function call counts to Scheme. if you don't care though about the distinction you can do (cdr (assoc 'a x)) which is going to give you 1. doing (cdr (assoc 'foo x)) will give you nil without erroring out. it's a pretty common pattern.

in case of established data types like association list, you will probably have a library of useful functions already defined, like you can write your own getassoc function that hides the above. you can also return multiple values from getassoc the same way as gethash does the first value being the value, and the second value being whether or not there's a corresponding cons cell.

but when you define your own adhoc cons cell based structures, you don't have the benefit of predefined functions. so let's say you have an association list of symbols to cons cells (setq x '((a . (foo . 1)) (b . (bar . 2)) (c . nil))). if I want to get foo out of that list, I'll say (cadr (assoc 'a x)) which will return foo. doing (cadr (assoc 'c x)) or (cadr (assoc 'missing x)) will both return nil. these latter manipulations require extensive scaffolding in Scheme.


is there a term to describe the language design choice (reminds me of SQL, btw, where it is equally bad IMHO) where doing things to nil just returns nil without erroring? I want to call it "bleeding nils/NULLs" if there isn't another term yet.

As stated, I think this design choice is terrible, especially if nil isn't equivalent to false in boolean comparisons (as it is in Ruby and Elixir- with Elixir actually providing two types of boolean operators with slightly different but significant behavior; "and" will only take pure booleans while "&&" will equate nil with false). It might mean cleaner-written code upfront but it's going to result in massively-harder-to-debug code because the actual error (a mishandled nil result) might only create a visible problem many stack levels away in some completely different part of the code.


> where doing things to nil just returns nil without erroring

Just call it a Result::Failure monad, say you meant to do that, and confuse legions of programmers for decades.


LOL. I mean, that's in essence what it's trying to do, right?


There really should be two different kinds of cons cells, one for "proper" linked lists and another for general purpose consing. The difference is that the cdr of the first kind of cons cell (I'll call it a PL-cons) can only be NIL or another PL-cons, not anything else. This would eliminate vast categories of bugs. It would also make the predicate for determining whether something is a proper list run in constant time rather than O(n). (There would still be edge cases with circular lists, but those are much less common than non-proper lists.)
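
For reference, here is the O(n) check in question as a minimal Common Lisp sketch (mine; it ignores the circular-list edge case mentioned above):

  (defun proper-list-p (x)
    "True if X is NIL or a cons chain terminated by NIL. Runs in O(n)."
    (loop for tail = x then (cdr tail)
          until (atom tail)
          finally (return (null tail))))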


I certainly know the Lisp information in this article already, but it's still a fun read. Hofstadter just has a charming way with words.

I found this bit extra amusing:

>It would be nice as well as useful if we could create an inverse operation to readers-digest-condensed-version called rejoyce that, given any two words, would create a novel beginning and ending with them, respectively - and such that James Joyce would have written it (had he thought of it). Thus execution of the Lisp statement (rejoyce 'Stately 'Yes) would result in the Lisp genie generating from scratch the entire novel Ulysses. Writing this function is left as an exercise for the reader.

It took a while, but we got there. I don't think 2024's AI is quite what he had in mind in 1983, but you have to admit that reproducing text given a little seeding is a task that quite suits the AI of today.


I do think LISP remains the major language that can encompass the strange loop idea he explored in his work. I know LISP is not the only homoiconic language, but it is the biggest that people know how to use where the "eval" function doesn't take in a string that has to be parsed.

I hate that people are convinced LISP == functional programming, writ large. Not that I dislike functional programming, but the symbolic nature of it is far more interesting to me. And it amuses me to no end that I can easily make a section of code that is driven by (go tag) sections, such that I can get GOTO programming in it very easily.
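
For the curious, GOTO-style flow in Common Lisp is just TAGBODY and GO (a minimal sketch):

  (defun countdown (n)
    (tagbody
     again
       (when (<= n 0) (go done))  ; jump forward, out of the loop
       (print n)
       (decf n)
       (go again)                 ; jump backward, plain GOTO style
     done))

  ;; (countdown 3) prints 3, 2, 1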


Another (properly functional) homoiconic language that briefly enjoyed mainstream adoption in the '00s is XSLT. Its metaprogramming features were rather widely used; that is, producing an XSLT from XSLT and maybe some more XML, instead of hand-coding something repetitive, was rather normal.

The syntax was a bigger problem than Lisp's syntax, though.

It's not easy to produce a language with a syntax that's good as daily use syntax, but is also not unwieldy as an AST. Lisp is one of the few relatively successful examples.


I don't know how many other languages use it but I've long admired Elixir's approach to giving devs access to the AST using its basic types in order to write macros:

https://hexdocs.pm/elixir/macros.html

It is certainly possible to implement this sort of thing in other languages, I think, depending on the compilation or preprocessing setup


Possible doesn't mean "requires the same amount of effort".


That's fair. I think it's a big win, though. Macros, when the situation calls for it, are amazing. For example, I believe most of the UTF8 handling code in Elixir was done via macros which brought down the amount of code that had to be maintained by quite a bit.


Thanks for this little flashback to when I had to write XSLT for Apache Cocoon as my student job.


> The syntax was a bigger problem than Lisp's syntax, though.

Yeah. XML and S expressions are pretty close to functionally equivalent. But once you've seen S expressions, XML is disgustingly clumsy.


They have a different model -- one is better for documents, and one is better for programs/data

XML and HTML are attributed text, while S-expressions are more like a homogeneous tree

If you have more text than metadata, then they are more natural than S-expressions

e.g. The closing </p> may seem redundant, until you have big paragraphs of free form text, which you generally don't in programs


SGML was intended for sparse markup in mostly plaintext files. From it grew HTML that is markup-heavy, and XML which is often 100% markup. What made sense for rare markup nodes became... suboptimal when applied in a very different role.


1. GML => SGML => XML

2. rm *

3. JSON

4. rm -rf /


"Any data can be turned into Big Data by encoding it in XML."


Wow.

Also:

XML: eXtremely Murky Language

or Mindblowing


For a while that is how I made my Website dynamic, by writing everything in XML and linking XSLT stylesheets, however the future ended up not being XHTML, and eventually I rewrote those stylesheets in PHP.

Doesn't win any prize, nor is it content worthy of an "I rewrote X in Y" blogpost, but it does the job.


Not to mention specifically with Scheme and continuation-oriented programming, the line between functional and non-functional programming becomes so blurry as to become nearly meaningless.


The definition of functional programming is itself quite blurry, says Chris Lattner (of Swift, LLVM, Mojo), in this talk I posted here recently:

https://news.ycombinator.com/item?id=41822811


Even the definition of a Lisp is blurry when we zoom in to find the separation boundary.


Lambda: the ultimate GOTO


I love and relate to any impassioned plea on SWE esoterica, so this seems like as good of a place as any to ask: What, in practice, is this deep level of "homoiconic" or "symbolic" support used for that Python's functools (https://docs.python.org/3/library/functools.html) doesn't do well? As someone building a completely LISPless symbolic AGI (sacrilege, I know), I've always struggled with this and would love any pointers the experts here have. Is it something to do with Monads? I never did understand Monads...

To make this comment more actionable, my understanding of Python's homoiconic functionality comes down to these methods, more-or-less:

1. Functions that apply other functions to iterables, e.g. filter(), map(), and reduce(). AKA the bread-n-butter of modern day JavaScript.

2. Functions that wrap a group of functions and routes calls accordingly, e.g. @singledispatch.

3. Functions that provide more general control flow or performance conveniences for other functions, e.g. @cache and and partial().

4. Functions that arbitrarily wrap other functions, namely wraps().

Certainly not every language has all these defined in a standard library, but none of them seem that challenging to implement by hand when necessary -- in other words, they basically come down to conviences for calling functions in weird ways. Certainly none of these live up to the glorious descriptions of homoiconic languages in essays like this one, where "self-introspection" is treated as a first class concern.

What would a programmer in 2024 get from LISP that isn't implemented above?


I'm basically a shill for my one decent blog post from a while back. :D

https://taeric.github.io/CodeAsData.html

The key for me really is in the signature for "eval." In Python, as an example, eval takes in a string. So, to work with the expression, it has to fully parse it, with all of the danger that entails. For Lisp, eval takes in a form. Still dangerous to evaluate random code, mind. But you can walk the code without evaluating it.
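
A tiny illustration of the difference (Common Lisp; elisp behaves the same way):

  (let ((form '(+ 1 2 3)))  ; code captured as a plain list, not a string
    (print (first form))    ; => +   inspect it without evaluating
    (print (length form))   ; => 4
    (eval form))            ; => 6   evaluate it when you choose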


HackerNews sadly never fails to disappoint. Thanks for taking the time to share, that was exactly what I was looking for! Would endorse this link for any lurkers.

The LISP (elisp?) syntax itself gives me a headache to parse so I think I'll stay away for now, but I'll definitely be thinking about how to build similar functionality into my high level application code -- self modification is naturally a big part of any decent AGI project. At the risk of speaking the obvious, the last sentence was what drove it home for me:

    It is not just some opaque string that gets to enjoy all of the benefits of your language. It is a first class list of elements that you can inspect and have fun with. 
I'm already working with LLM-centric "grammars" representing sets of standpoint-specific functions ("pipelines"), but so far I've only been thinking about how to construct, modify, and employ them. Intelligently composing them feels like quite an interesting rabbit hole... Especially since they mostly consist of prose in minimally-symbolic wrappers, which are probably a lot easier for an engineer to mentally model -- human or otherwise. Reminds me of the words of wonderful diehard LISP-a-holic Marvin Minsky:

  The future work of mind design will not be much like what we do today. ...what we know as programming will change its character entirely - to an activity that I envision to be more like sculpturing.
  To program today, we must describe things very carefully because nowhere is there any margin for error. But once we have modules that know how to learn, we won’t have to specify nearly so much - and we’ll program on a grander scale, relying on learning to fill in details.
In other words: What if the problem with Lisp this whole time really was the parentheses? ;)

source is Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy: https://onlinelibrary.wiley.com/doi/full/10.1609/aimag.v12i2...


Glad you liked the post. I didn't make any effort to make the elisp readable, so please don't let that fully put you off the topic! :D

I keep meaning to expand on the idea. I keep not doing so. I have higher hopes that I can get back to the Rubik's cube code. Even there, I have a hard time getting going.


> In other words: What if the problem with Lisp this whole time really was the parentheses? ;)

I have yet to find a syntax style more ergonomic than s-expressions. Once you appreciate the power of structural code editing, your view of s-expressions is likely to change.


> HackerNews sadly never fails to disappoint.

FYI, that means the opposite of how you used it.

"Never fails to disappoint" is an idiom that means a person or thing consistently disappoints.


I was grappling -- and failing -- to parse that comment. Thank you for explaining.

To unpack the explanation, because I was wondering how the very negative statement could be misinterpreted:

"Never" is a negative; "fails" is a negative; in English, two negatives cancel out.

"Never fails to disappoint" means "always disappoints".


But they said "sadly". They said "HackerNews sadly never fails to disappoint".

If they really meant the opposite that "HackerNews never disappoints" why would that be "sad"?

But since they said "sadly" I think they really meant what they wrote which is that HN consistently disappoints them. Maybe they meant that the Lisp articles on HN do not go into describing exactly how "code as data" works in actual practical matters and that is consistently disappointing? And they are finally happy when someone explained "code as data" to them in the comments section?


The syntax of Lisp is made up of the same fundamental data types as you use when writing Lisp programs. `(+ 1 2 3)` is both a Lisp expression that evaluates to 6 and also a list containing four items, the symbol `+` and the numbers 1, 2, and 3.

In general, we can say that the Lisp language is very good at manipulating the same data types that the syntax of Lisp programs is made from. This makes it very easy to write Lisp programs that swallow up Lisp programs as raw syntax, analyze Lisp programs syntactically, and/or spit out new Lisp programs as raw syntax.
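
A minimal sketch of what that looks like in practice (Common Lisp; the function name is mine):

  ;; Walk a Lisp form as ordinary list data, producing a new form.
  (defun swap-plus-times (form)
    (cond ((eq form '+) '*)                          ; rewrite the symbol
          ((consp form) (mapcar #'swap-plus-times form))
          (t form)))

  ;; (swap-plus-times '(+ 1 (+ 2 3)))  => (* 1 (* 2 3))
  ;; (eval (swap-plus-times '(+ 2 3))) => 6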


Some of it is because many people's only contact with Lisp is via academia, and the academics who teach it actually don't care about developing anything with Lisp. They use it as a vehicle for concepts, and those concepts typically revolve around functional recursion.

The Scheme language and its surrounding culture are also culprits. Though Scheme isn't functional, it emphasizes pure programming more than its Lisp family predecessors. The basic language provides tail recursive constructs instead of iterative ones, and demands that implementations optimize tail calls.


>Emacs defalias

In Common Lisp too, by defining defalias as a macro:

https://stackoverflow.com/questions/24252539/defining-aliase...


I loved Hofstadter's writing on Lisp in Metamagical Themas and adapted the code in the last article of the series to Clojure for a study group at work, written up here[1].

[1] http://johnj.com/posts/oodles/

edit: clarification


Nice. I wonder if there is a translation to a modern Lisp.



> As you might expect, the value of the atom 1729 is the integer 1729, and this is permanent. (I am distinguishing here between the atom whose print name or pname is the four-digit string 1729, and the eternal Platonic essence that happens to be the sum of two cubes in two different ways - i.e., the number 1729.)

He is? What is the distinguishment he is making?

This writing style is... interesting.


The use of 1729 would be known to people who know about Ramanujan: https://en.wikipedia.org/wiki/1729_(number)


Hey, that's the code to my safe!


An atom is something defined in the semantics of Lisp and is part of the program; it will be represented as bits in the computer memory and as pixels on the screen. A number is a very general concept with many representations, one of which is as a Lisp atom, and another could be a pile of 1729 kiwis. The kiwis and the code both represent the number, but they don't represent each other.


The lisp atom 1729 is like a "constant" in a programming language, representing a particular arrangement of bits in lisp systems. The integer 1729 is a number that, in a mathematical sense, has always existed and will always exist regardless of computer systems.

While some atoms can be assigned values, the atom 1729 cannot be assigned any value other than the number 1729.


> In a testament to the timelessness of Lisp, you can still run all the examples below in emacs if you install these aliases:

> (defalias 'plus #'+)

> (defalias 'quotient #'/)

> (defalias 'times #'*)

> (defalias 'difference #'-)

Looks like we also need a defmacro for def, which is used further on in the article:

> > (def rac (lambda (lyst) (car (reverse lyst))))

I mean the above example fails in Emacs:

  ELISP> (def rac (lambda (lyst) (car (reverse lyst))))
  *** Eval error ***  Symbol’s function definition is void: def
If we want the above example to work, we need to define def like this:

  ELISP> (defmacro def (name lambda-def) `(defalias ',name ,lambda-def))
  def
Now the previous example, as presented in the article, works fine:

  ELISP> (def rac (lambda (lyst) (car (reverse lyst))))
  rac
  ELISP> (rac '(your brains)) 
  brains


> Every computer language has arbitrary features, and most languages are in fact overloaded with them. A few, however, such as Lisp and Algol, are built around a kernel that seems as natural as a branch of mathematics.

Algol? The kernel of Algol seems as natural as a branch of mathematics? Can anyone who has used Algol give their opinion of this statement?


From what I’ve studied, Algol wasn’t designed for typical software development—its main purpose was to give computer scientists a way to describe algorithms with a level of rigor that mirrors mathematical notation.


I did some Algol programming back in the late 80s - when it had mostly been obsoleted by Pascal, Modula, and even C for what we called "structured programming" back then.

I remember it as a likeable, economical, expressive language, without significant warts, and which had clearly been influential by being ahead of its time.

So my guess is that Hofstadter was just referring to its practical elegance - rather than the more theoretical elegance of Lisp.


Out of curiosity: which dialect on Algol, and on what platform?


I'm not sure, but possibly Algol 68. It was on an IBM mainframe running VM/CMS - possibly a 3090.

Long time ago...


Hard to say without knowing which version of Algol he is referring to. Algol 68 was very different from Algol 58.

Algol 60 was the first language with lexical scope, while Algol 68 was a kitchen-sink language that (positively) influenced Python and (negatively) influenced Pascal.


It was discovered that the procedure mechanism of Algol 60 was effectively equivalent to the lambda calculus. This insight was written out in a famous paper by Peter Landin, "Correspondence between ALGOL 60 and Church's Lambda-notation: part I"

https://dl.acm.org/doi/10.1145/363744.363749
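
Rendered in Scheme rather than Algol (a loose sketch of the idea, not Landin's notation), a block that declares and initializes a variable is just the application of a lambda:

  ;; begin real x; x := 5.0; print(x) end  -- an Algol-style block
  ;; corresponds to applying a lambda to the initial value:
  ((lambda (x) (display x)) 5.0)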


C is basically Algol with curly braces and pointers. The sentiment expressed there is probably equally applicable to C, or maybe Pascal. Those are often held up today as a minimal example in contrast to Lisp. There is a sort of sparse, warty elegance to the family. Blocks, arrays, if/then, assignment, while loops. What more could you need?


I've used both C and Pascal. The simplicity of C comes through to me (less so Pascal - the verbosity gets in the way). I never thought of it as "as natural as a branch of mathematics", though.

I mean... I guess you could think of it as having its own set of self-consistent axioms, and from them you can build things. It's a lot larger set of axioms than most branches of mathematics, though.

I guess, if Hofstadter meant the same level of naturalness, well, yes, C did feel pretty natural to me, so... maybe?


I read that article when it came out, as my parents subscribed to Scientific American. Even though I had learned BASIC and Pascal, the concepts in the article were just way over my head. Also, I had no access (that I was aware of at least) to a machine where I could try my hand at Lisp programming. Alas, I wish I had taken it more seriously.

At least Hofstadter was successful at getting me interested in math beyond high school.


I took M490 “Problem Seminar” (a math class) in 1995 with Dr. Hofstadter - we studied triangles, and the definition of a triangle’s center.

You would think that there is a limited set of “triangle centers” but he showed us (and he had us discover and draw them out using The Geometer's Sketchpad) dozens of ways to find triangle centers and he had notes on hundreds more definitions of triangle centers.

His approach to teaching was fun and made us want to take on challenging problems. :)


Me too. I admire the theory of Lisp, but man, all the Lisp folks going "but don't you get it, the absence of syntax IS the syntax!" don't half get tiring.

For some of us, we can just about handle the simple algebraic infix stuff, and we'll never make that leap to "my god, it's full of CARs".

https://xkcd.com/224/


If you have a look at some Emacs code (and modules such as Mastodon.el), you'll see that the syntax is not that scary, as Lisp makes it trivial to modularize code into smaller functions.


I have spent years writing about and studying Lisp, including buying several books.

This is categorically not the case.

Let me paraphrase my own post from Lobsters a year or two back:

I hypothesise that, genuinely, a large fraction of humanity simply lacks the mental flexibility to adapt to prefix or postfix notation.

Algebraic notation is, among ordinary people, almost a metonym for “complicated and hard to understand”. I suspect that most numerate people could not explain BODMAS precedence and do not understand what subexpressions in brackets mean.

I have personally taught people to program who did not and could not understand the conceptual relationship between a fraction and a percentage. This abstraction was too hard for them.

Ordinary line-numbered BASIC is, I suspect, somewhere around the upper bound of cognitive complexity for billions of humans.

One reason for the success of languages with C syntax is that it’s the tersest form of algebraic notation that many people smart enough to program at all can handle.

Reorder the operators and you’ve just blown the minds of the majority of your target audience. Game over.

I admire Lisp hugely, but I am not a Lisp proponent.

I find it fascinating and the claims about it intrigue me, but to me, personally, I find it almost totally unreadable.

Those people I am talking about? I say this because I am one.

I myself am very firmly in the camp of those for whom simple algebraic infix notation is all I can follow. Personally, my favourite programming language is still BASIC.


Many more people use Excel formulas than any programming language. And I don't believe it's solely thanks to easy notation. Rather, something about how the data is laid out? Idk, I would like CS to reflect on this more instead of internecine wars on programming syntax.


So very much this.

I think programming languages are also long overdue for some controlled trials. They can't be blinded: any experimental subject bright enough to learn to program is probably going to know what language they are programming in.

But trials comparing the effectiveness of different languages. How long they take to attain a specified level of proficiency, how long to learn enough to produce working code, and importantly, readability: for instance, how long it takes to find intentionally-planted bugs in existing, unfamiliar code.

NeXT did this, way back in the 1980s, and Sun lost badly:

https://www.youtube.com/watch?v=UGhfB-NICzg

There is a writeup of some of it here:

http://www.kevra.org/TheBestOfNext/BooksArticlesWhitePapers/...

But speaking as a non-video-liker, this 17min one is worth it.


> I have spent years writing about and studying Lisp, including buying several books.

The key is "writing and maintaining" Lisp software.

Lisp often won't get learned by reading or writing ABOUT it, but by reading AND writing actual Lisp code.

It's a bit like riding a bike. You can study bikes for a long time, but you will typically not be able to ride one. That is something which can only be learned by actually practicing riding the bike. It also means not needing to consciously think about it, but moving tasks to internal automatisms. Lisp code is data, and this data wants to be "manipulated". This manipulation is a key to learning Lisp. The other key element is to work with a system which gives live feedback -> interactive programming. "Interactive" means to do things, to fail, to improve, to do it again.

It's in part the experience of actually using an interactive programming system.


I agree. Even after only working through the examples and small learning projects in "Common Lisp: A Gentle Introduction to Symbolic Computation", writing Lisp is quite easy. Maybe the secret is writing Lisp instead of writing about it and studying it. The same approach also works with every other programming language, as a bonus :)


> The key is "writing and maintaining" Lisp software.

You may be right there, but I think there is a point you are smoothing over and almost trying to hide here.

What if someone can't get to the point where they are able to write useful code?

If you can't start riding a bike without stabilisers or someone holding it, then you're never going to learn to ride well.

At around age 11 or 12 I tried to learn to roller-skate. My parents bought me skates, and I put them on and tried to stand up.

I fell over so much over a few days that I bruised my pelvis and walking became very painful, let alone lying down. It was horrible and I gave up.

25 years later I managed to learn to ride a snowboard, after years of failure, because of having to do an emergency turn to avoid hitting some children and getting up on one edge and learning that edge-riding is the key. Nobody told me, including 3 paid days of lessons with a professional teacher.

It took great persistence and physical pain but I did it. I gave up on skating of any kind.

My core point is that people vary widely in abilities. Some people pick up a complex motor skill in 10-15min and can do it and their skills grow from there. Others may struggle for days or weeks to attain that... And most are not doggedly determined enough to try for that long.

Algebra is most schoolchildren's way of describing "something extremely hard to learn and pointless in everyday life." For ordinary humans, the concept of "variables" and the "symbols" that manipulate them IS A WAY TO TALK ABOUT something super-difficult.

But most of them, with effort, can just about get through it. Very, very few choose to follow it further.

And yet, there are a few families of programming languages -- Lisp, Forth, Postscript, HP calculator RPN -- whose basic guiding assumption is "you will easily master this basic action, so let's throw it away and move on to the logic underneath".

And the people who like this family of languages are annoyed and offended that other languages that do not require this are hundreds of times more popular and are used by millions of people.

Worse still, when someone comes and says "hey, maybe we can simplify that bit for ordinary folks", they mock and deride the efforts.

Maybe just allow yourself to think: perhaps this stuff that's easy for me is hard for others, and I should not blame them for them finding it hard?


> What if someone can't get to the point where they are able to write useful code?

Then I still won't believe it's a Lisp syntax problem, unless they have a background of success with other languages.

Some people don't have a knack for programming. Among them, some try this and that, and falter in various ways.

> And the people who like this family of languages are annoyed and offended that other languages that do not require this are hundreds of times more popular and are used by millions of people.

Those other languages are harder because of their syntax.

Languages that remove syntax are an affront to the self-image that people have built up over their mastery of convoluted syntax.

For most ordinary people, learning programming is equated with learning syntax. When they are memorizing things like the fact that + has a higher precedence than >>, they feel they are getting smarter.

The idea that this stuff is not necessary, and that you actually don't know jack if you don't know semantics, is a huge threat.

Once a beginner is off into a syntax-heavy language, chances are high we have lost them forever, due to simple ego effects.


> If you can't start riding a bike without stabilisers or someone holding it, then you're never going to learn to ride well.

In German: "es ist noch nie ein Meister vom Himmel gefallen". We all start somewhere, we go to school, we have teachers, we have trainers/coaches, we have mentors, ...

I don't think studying it alone will help, best is with people around. Parents and friends will help us to learn how to ride a bike. They will give an example, they will give feedback on our attempts, they will propose what and how to try to master it. After the initial basic hurdle is done, then comes a lot of practice. But again, best by being embedded in a community. Learning such skills is a social activity.

There is a lot of pedagogical material to learn programming with Lisp, Logo, Scheme. I had courses about software development, using languages like PASCAL, LISP, Scheme and others. We got exercises and feedback. We got access to computers, cpu time and an environment for coding. I looked around and setup my own tools and wrote stuff with it. I discussed this stuff (code, environment, architecture, styles, ...) with a friend.

> perhaps this stuff that's easy for me is hard for others, and I should not blame them for them finding it hard?

Lots of people are frightened by thinking/hearing that it is hard, while in fact it actually isn't.

For example, one often reads that German is very difficult for native English speakers. There are a lot of justifications given for that. The actual data says something different. German is very near to English; English even is a Germanic language: https://en.wikipedia.org/wiki/Germanic_languages

The actual ranking: https://effectivelanguagelearning.com/language-guide/languag...

Trying to learn Lisp without actually trying to write code sounds like trying to learn a language without actually trying to speak with people. Possible, but unnecessarily hard.

We need to make our brain adapt to the new language by moving into an environment, where the words connect to the real world and thus to meaning.

Maybe just allow yourself to think: Giving feedback is not "blaming". That's an early concept needed for moving forward.


I think you are wrong.

Let me try to demonstrate with a parallel example.

> "es ist noch nie ein Meister vom Himmel gefallen"

My best guess is: A master does not ready from heaven fall?

One does not instantly become a master?

Different people find different skills easy.

So: ich kann ein bisschen Deutsch spreche. Nicht so viel, und mein Deutsch is nicht gut; es is sehr, sehr schlecht. Aber fur meine Ferien es genug ist.

Ich hat drei Tage Deutsch gestudiert, unt es war in 1989. Drei tage, am ein bus vom Insel Man nach der Rhein.

I am fairly good with languages. I can communicate in 6 foreign languages. Currently, I am studying Czech, because my wife is Czech, and I would like to be able to speak with her family, some of whom speak no English, or German, French, Spanish or anything else I speak at all.

Czech is really hard. It makes German look like an easy beginner's language. In place of German's 4 cases, Czech has 7; in place of German's 3 genders, Czech has 4. (Czechs think there are 3, but really there are 4. Polish has 5.)

I am somewhere past A2 level Czech, beginning B1, and I can hold a simple conversation, mainly in the present tense. But I started at age 45 and it took me about 5 or 6 years of work to get to this level. Basic tourist German I got in about 30 or 40 hours of hard work when I was 20 years old.

I am not bad at languages.

I am terrible at mathematics and very poor at programming. I used to be capable and proficient in BASIC and fairly good in FORTRAN. I managed a simple RLE monochrome image compression and decompression program in C, and an implementation of Conway's Game of Life in Pascal, and that is the height of my achievement.

I am pretty good at getting other people's code working, though. Enough to be paid to do it for decades.

I find Python quite hard -- weird complicated stuff like objects comes in early, and nasty C syntax peeks through even simple stuff like printing numbers.

Lisp, though, switches from just about comprehensible code to line noise very quickly after the level of "Hello world".

I got hold of a copy of SICP. It's famous. It's meant to be really good.

I could not follow page 1 of the actual tutorial.

Perhaps you know it.

In section 1.1.1, it says:

« (+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))

which the interpreter would readily evaluate to be 57. We can help ourselves by writing such an expression in the form

  (+ (* 3
        (+ (* 2 4)
           (+ 3 5)))
     (+ (- 10 7)
        6))

following a formatting convention known as pretty-printing, in which each long combination is written so that the operands are aligned vertically. The resulting indentations display clearly the structure of the expression. »

The "helpful" pretty-printed part is incomprehensible to me. Section 1.1.1 is about where I gave up.

I think that this kind of issue is not just me.

Again: I submit that a bunch of people good at a very difficult skill are badly overestimating how good ordinary folks would be at it.

Most people can't program. Most people can't do mathematics. Most people are not good at this stuff.

The people that can do maths and can program mostly can only program in simple, infix-notation, imperative languages. Functional languages, or even prefix and postfix notation, are a step further than I suspect 99% of humans can go.

And the attitude of those who can do it to those of us who can't do it is really not pleasant.


> Most people can't program. Most people can't do mathematics. Most people are not good at this stuff.

No doubt about that.

SICP is the wrong book.

SICP is for people who are good at maths. Most of the examples are maths related. That's a well known complaint about the book. Often such maths-heavy introductory courses filter out the students who are not good at maths. On purpose.

SICP is not for beginners learning Lisp programming. SICP was a university introductory course book for computer science. It was developed out of maths-heavy CS lectures. Various other books tried to improve on it, both to make some of the topics easier to learn and to make it more advanced in programming language technology.

Easier SICP from Brian Harvey

https://www.youtube.com/watch?v=cuTOo_Kj4U0&list=PL91cR71aKp...

or him adapting this stuff to Logo: Computer Science Logo Style. https://people.eecs.berkeley.edu/~bh/

Or his book "Simply Scheme": https://people.eecs.berkeley.edu/~bh/ss-toc2.html

But what you are looking for is a book for a software developer wanting to learn practical Lisp programming with different examples.


Although SICP is an awful book, the thing in section 1.1.1 that the OP refers to, where it presents formatting an expression:

  (+ (* 3
        (+ (* 2 4)
           (+ 3 5)))
     (+ (- 10 7)
        6))
is a decent point.

I mean, how could they present this differently? Pretty much any Lisp book should explain this stuff the same way. Look, we have these parentheses and that's what the machine cares about, but we split across lines and indent like this.

If someone finds that reformatting to be incomprehensible and unreadable, virtually no different from the original one liner, they may have some cognitive issue (a form of dyslexia or something like it). Likely they will struggle with programming in any language.

I suspect it's not "cognitively typical" to fail to find the visual structure of the above formatting obviously helpful.


I disagree.

I was fine with Pascal, Fortran, minimally competent in C, and happy in half a dozen dialects of BASIC, which remains my preferred language.


I don't know, man. If a big truck is hurtling towards me, I'd prefer that the person behind the wheel not have the cognitive impairment that prevents them from being able to grok the visual structure of:

  (+ (* 3
        (+ (* 2 4)
           (+ 3 5)))
     (+ (- 10 7)
        6))
It's just very basic recognition of shapes suggested by incomplete contours.


Pascal, Fortran, C, ... All these languages use prefix notation for function calls.

Fortran:

    min(size(b), size(a))
Lisp:

    (min (size b) (size a))


I suspect you might have some dyslexia-like cognitive or visual issue that makes it hard to work with programming language syntax.

Given the multi-line layout:

  (+ (* 3
        (+ (* 2 4)
           (+ 3 5)))
     (+ (- 10 7)
        6))
I strongly suspect most ordinary people with neurotypical visual pipelines would find it helpful and more comprehensible over the same expression formatted as one line, regardless of their aptitude for math, or the semantics of programming.

It can't be that only a minority of people have it as a "special skill" to see a simple visual pattern with hierarchical grouping and alignment.


I would say that it is a bang-on certainty that...

> It can't be that only a minority of people have it

Only a minority of people have the ability to understand algebra.

Of them, only a minority can usefully use it and apply it.

Of them, only a minority can formulate an algorithm and construct code to perform it.

Of them, only a minority can tolerate having the helpful algebraic notation removed and replaced with a bare abstract syntax tree decorated with parentheses.

Why do you think most people only understand enough about Lisp to make jokes about all the parens?

Why do you think most people gravitate towards the simplest, shortest, infix-notation language and moved the entire industry on to that?

By coincidence?


Algebra is taught in public schools. The majority of children are able to get through the classes: do the homework and pass the tests. Not sure where you're getting your statistics.

Not everyone likes it, or ends up going into a field that requires math, but that's not the same thing as having no ability to understand it.

Why most people only understand enough about Lisp to make jokes about all the parens is like asking why some people only understand enough about Poland to tell jokes, like four Polaks turning a ladder so a fifth one can change a light bulb.


Basically no programming language is an infix-only-notation language. Most programming languages are using mixed notation: prefix, infix and possibly postfix. In most languages function calls are prefix. For a subset of functions there are infix operators: a + b, instead of +(a,b). For control flow many programming languages use a more elaborate syntax.

The usual mathematical notation is two-dimensional. Fractions are written a / b and

    a
    -
    b
Also think of square roots and all kinds of other mathematical notation.

When I verbalize a * (b + c) as a natural language expression then it is:

    Multiply a with the sum of b and c.
The operators are prefix.

In German:

    Multipliziere a mit der Summe aus b und c.
in Lisp:

    (* a (+ b c)) 
or

    (* a
       (+ b c))
or

    (* a
       (+ b
          c))
or

    (* a (+ b
            c))
or

         (*

            a

    (+ 

             b
        c)) 
The latter, more random layout, will not be used by Lisp programmers, because it does not follow the usual layout rules. But it is possible, since the layout of the lists is not significant in Lisp.

A pretty printer in Lisp is a function which takes an expression and writes it in some layout. Thus Lisp can generate 2d layouts of expressions. Most Lisp-supporting editors can indent or even lay out Lisp expressions. This helps Lisp developers use a common layout of code, supported by the tools.
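
In Common Lisp, for instance, the standard pprint function does exactly this; binding *print-right-margin* forces a narrow multi-line layout (the exact line breaks vary by implementation):

  (let ((*print-right-margin* 16))
    (pprint '(* a (+ b c) (- d e))))
  ;; prints something like:
  ;; (* A
  ;;    (+ B C)
  ;;    (- D E))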


> In place of German's 4 cases, Czech has 7; in place of German's 3 genders, Czech has 4. (Czechs think there are 3, but really there are 4. Polish has 5.)

Polish has 3 genders (masculine, feminine and neuter), just like Czech. And 7 cases, like Czech.


Nope.

In no particular order:

1. żeński

2. nijaki

3. męski męskożywotny

4. męski męskorzeczowy

5. męski męskoosobowy

Contrast with Czech:

1. ženský

2. střední

3. mužský životný

4. mužský neživotný

You may not notice them, you may not consider them to be genders, but they look like it, they act like it; they're there and they make life very difficult for foreign learners.


Personal (męskoosobowy), animate (męskożywotny), and inanimate (męskorzeczowy) are not genders. Which is the same situation with Czech (životný - animate, etc).

I’m Polish. And did study Polish on top of that.


:shrug:

If it looks like a duck, walks and swims and quacks like a duck, it's a duck.

I lived in Czechia 10 years and after over half a decade of bloody hard work, I got to beginning B1 level Czech. It has 4 genders and they change adjectives and the accusative declension, and it is not important to me that Czechs don't consider them genders. They're genders. The levels of the hierarchy do not matter, merely the number of nodes.

A comparison: English has no future tense, strictly speaking. But in reality, really, it does: "I will say X". In fact arguably two: "I am going to say X." Technically to a linguist it's not a tense, it's a mode expressed with an auxiliary verb, but that doesn't matter. Acts like a tense. Used like a tense. It's a tense.

Slavic nouns come in arbitrary categories and you need to know which category a noun is in to decline it properly. French and Spanish have 2, German has 3, Czech has 4, Polish has 5. What they are called? Don't care. Not important to me.

I do not know Polish or speak Polish. I am 100% not claiming any authority here.


Let me elaborate slightly.

For one, "męski męskożywotny" is not what it is called, it is just męskożywotny (the gender is already in the word, male male-animate, has a weird ring to it).

But all that means is that the object is of a masculine gender, and is living.

Męskoosobowy (masculine, person) -- małego chłopca (small boy)

Męskozwierzęcy (masculine, animal) -- małego psa (small dog)

Męskorzeczowy (masculine, thing) -- mały dom (small house)

Żeński (feminine) -- małą górę (small hill)

Nijaki (neuter) -- małe zwierzę (small animal)

The three masculine examples are all of the same gender, masculine -- the difference is if they are a person, animal or thing. None of which are genders, a house and a dog are both masculine.

I'm not going to argue about the complexity of Slavic, especially West Slavic languages -- cause they are complicated. :-). But you are absolutely incorrect in saying that we (Czech or Polish) have more than 3 genders. That you don't think it is particularly important is a bit sad, since these are the things that make Slavic such a fun language group.


I'm Slovak here. Although there are three genders, there are certain situations in which certain kinds of nouns undergo changes according to a finer subdivision than the 3 genders. I'm no expert on that. I don't think it necessarily amounts to separate genders. Or does it?

Let me see if I can recall an example. Okay, how about the word for horse, which is kôň, and man, which is muž. This is masculine: ten kôň (that[masc] horse), ten muž (that[masc] man).

However, in the plural we have tí muži (those[masc] men) and tie kone (those[fem? neut?] horses)?

The demonstrative tie is the same as the feminine one, tie ženy (those[fem] women) or neuter tie deti (those children).

Even if that is a special gender difference, it does not fall along the animate versus inanimate line, because horses are clearly animate.

Inanimate objects that are masculine in the singular do fall into this: ten stôl (that[m] table), tie stoly (those[f] tables).

It might be human versus non-human. Collections of non-human male gender things are not themselves males, but neuters.


I find it strange that you are labelling ten/tie/... as having gender. I don't know Slovak, but I'd expect it is the same as in Polish: the gender is on the subject. E.g., stôl is stół in Polish, and "męskorzeczowy", so masculine. "Ten stół" or "te stoły" -- te or ten is neither feminine nor masculine.


These demonstrative particles themselves don't have gender since they are not nouns, but they have a gender-based variety, and must pair correctly with the nouns by gender.

It's similar to la and le in French. You cannot say "Vive le France"; it has to be "la France".

They are used as helpers in communicating the gender of a noun. If we say "ten stôl", it reaffirms that the noun is masculine. "tá stôl" is ungrammatical.

Other words are like this. E.g. interrogative wh- words: "ktorý muž?" (which man?) "ktorá žena?" (which woman?)


"I will ..." is a nonpast tense, though.

The semantics is future, but tense is a matter of syntax.

The modal verb which establishes the future semantics is not in a future tense; it is in its dictionary form: to will.

In archaic English we can say things like "As I will it, so it shall be" where the verb isn't acting as a modal. The modal will comes from that one, I believe.


The rule of three is algebra, yet every grandma knows it.


What is "the rule of three"? I have never heard of it, and Googling it was not helpful. Conspiracy sites and rules for fiction writers and things.


Dunno. The only programming rule of 3 I know is that, in a C++ class, you provide a destructor, a copy constructor, and an assignment operator. But that doesn't fit either, because it's not really algebra, and every grandma certainly does not know it, and it doesn't fit in a Lisp article anyway.


Cross-multiplying related proportions to find the fourth value:

https://mathworld.wolfram.com/RuleofThree.html


The basic proportion-matching formula, which is just a linear equation in disguise:

        a/b = c/d, we know a, b and c, solve for d, which is cb / a.
As I said, every grandma did that to guess some percentages. Thus, if anyone can grasp that, he/she is ready for Algebra.
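
A worked instance (my own example): if 3 apples cost $2, what do 9 apples cost?

    3/2 = 9/d  ->  d = (9 * 2) / 3 = $6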


Never heard of this before, and no, I do not think my own mother could do this.

Again: I think you badly underestimate how hard this stuff is.


Ever?

A book costs $20.

How many books could you get with $200?

That's it. Even my granma living in a remote village understood it.


The person you are replying to clearly stated that every grandma knows this, and yet you seem to have given up without even trying to ask a grandma.


Well, this is true. I am 56, and my last living grandmother died over 30 years ago.

I became a father 5 years ago, so strictly, my mother is a grandmother. I could ask her, but I am very confident she does not know.

I have a science degree, and I just barely scraped through a statistics 101 course with great difficulty. I am pretty smart; I speak 6 foreign languages, and I have held down a career in software for approaching 40 years now, by understanding hard stuff and making it work, or documenting it, or explaining it.

But I find algebra hard, just scraped through a mathematics 'O' level in 1986 by taking corrective classes and resitting the 1985 exam that I failed.

I stand by what I said.

I've never heard this rule. Looking at the Wolfram explanation, I could do that, yes. But I've never heard of this, and I am pretty confident my mother could not do this.


I don't think it came across clearly but I was teasing the commenter you were replying to (and not you) for making an absurd claim in a confusing, sexist, and ageist way. Although my own grandparents are also dead, I know quite a few grandmas, including my stepmom (who had a ~5 decade career as a software engineer) - that are brilliant mathematicians and/or coders - so I take exception to the idea of using the word grandma as a stand-in for an ignorant person. Also, every person educated through public school in a wealthy country that isn't mentally disabled knows actual algebra and has studied it for 6+ years, not just the "rule of 3" whatever that is.

However, I do want to say something about listing out your qualifications and experience like you did on here... in the petty power struggles and trolling on the internet it does the exact opposite of what it seems like it should. It's putting the other person in charge of deciding if you are "good enough" to participate or have an opinion, by implicitly making an effort to convince them and asking them to judge you. Your opinion and reasoning carry more weight on their own, without arguing why you should have the right to have them.


Hmmm. Well, if we all misread you, ISTM that you also misread me.

What I was trying to say was: "I am pretty smart, but I can't do this."

Which means: "different people are smart in different ways."

Which means: "what is no problem for Lisp coders can be a pretty big problem for other folks."


Is your reply possibly meant for anthk and not me? I did understand what you wrote in the way you describe here.


No, it was for you, but it applies a little bit to them as well.


> a large fraction of humanity simply lacks the mental flexibility to adapt to prefix or postfix notation.

I doubt it. Firstly, there are entire prefix and postfix natural languages, which have capable native speakers of all intellectual persuasions. But in natural languages, sentences do not go to very deep levels of nesting before people get confused.

In programming, we have the deep nesting quite often. Nobody has the mental flexibility to adapt to it. We indent the code instead.

Nobody can read a large Lisp program (or even a small one) if it is flattened into one long line, which is then wrapped to a paragraph.

Within a single line of Lisp, there is rarely much nesting going on where the prefix notation causes a problem. The rest is indentation.

Everyone doing serious programming relies on their editor for that, which helps them spot nesting errors.


> Firstly, there are entire prefix and postfix natural languages, which have capable native speakers of all intellectual persuations.

Sure. And you are also aware that there are natural languages which are regarded as being very hard for non-native adults to learn, right?

Some natural human languages are easier than others. This is axiomatic.

Some programming languages are easier than others too. Excluding the ones that are designed to be hard, from INTERCAL to Ook!


I'm not aware that major features like verb order cause difficulty in language learning.

It's usually gratuitous syntax like noun cases, especially when the same feature is not present in any shape in one's native language.

Also writing systems that have large numbers of symbols, which have multiple interpretations.

In any case all mainstream programming languages have prefix notation in the form of function calls. And also statement constructs that begin with a command word followed by argument material.

Imperative sentences in English amount to prefix notation because the subject is omitted (it is implicitly "you"), so we're left with verb and object.


Sure, but that is not my key point here.

You're focussing on the detail while ignoring the general picture.

I am not comparing Lisp to Mandarin Chinese or something. That would be silly.

What I am saying is that there are a whole bunch of languages (both kinds) which presumably seem perfectly easy to those who grew up with them, but if you didn't and you come to them after learning something else that's much simpler and doesn't do the fancy stuff, then they seem really hard. Consistently, for lots of people, regardless of background.

Doesn't matter how good your Arabic, Mandarin will be hard, and vice versa.

https://www.geeksforgeeks.org/hardest-languages-in-the-world...

That list isn't sorted by your source language, your L1. That doesn't matter.

If they come from an infix-notation imperative language then most people are going to find moving to an impurely-functional prefix-notation one is really hard. And most languages are infix-notation imperative languages.


Sorry, how does the point that natural languages are absolutely hard or easy, regardless of one's native language, speak to the point that moving to a prefix programming language from infix is hard?

There are hardly any Lisp programmers today who didn't "come from" prefix languages.

All mainstream languages are heavily steeped in prefix notation. The infix stuff is just a small core, typically. Mainly it is for arithmetic. Sometimes for string manipulation or set operations on containers, perhaps I/O.

Libraries are invoked with f(a, ...) prefix or perhaps obj.f(a, ...) mixed infix and prefix.

Libraries have far more content than the infix material.

Even small programs are divided into functions, invoked with prefix. Prefix is relied on for major program organization.

Command languages use prefix: copy filea.txt this/dir.

Statement and definition structures in mainstream languages are prefix: class foo {, goto label, return 42, procedure foo(var x : integer), ...

The idea that programmers coming to Lisp are confused by prefix does not hold water.


The core unifying point here is that some things are harder to learn than other things.

If you find that a difficult idea, I don't know how else I can put it.


> harder to learn than other things

"harder" is relative to a reference. Even "hard" is relative.

The simple nested arithmetic code you stopped at in SICP is easy for most people working as developers. Maths from 5th year school are sufficient. Nested lists, prefix notation isn't "hard" for "most" people.

It's just slightly harder for someone who has never seen it. When they have seen XML or JSON, then it's even easier.

Actually "hard" Lisp code looks and feels different. First "harder" hurdles in Lisp are typically evaluation, recursion, macro expansion, compile-time vs. runtime, meta-notation (code as data), ...

I spent 2 years learning PASCAL and MODULA-2 on an Apple ][ (and I still think that was fun) before coming into contact with the alien world of Lisp. I was hooked within a short time.


I really feel like you are determined to misinterpret and deny what I am trying to say, without engaging and listening.

This is a very Lispian sort of attitude. "No, you are wrong, this is in fact the correct way..."

There is an existence proof that the attitude of "just keep at it and it will make sense" is not true, and it's an international software industry of buggy, leaky, insecure C code that keeps hundreds of thousands of people in work and makes _tens of millions of dollars_ every year.

If the alternative really was better, someone would have found a way to make money from it, but they have not.

And yet, despite the terrible macho-man culture of the software industry, where it is a sign of how elite you are to write in a non-bounds-checked language with manual memory management and they make jokes about "blowing your own leg off" and "write-only code", Borland made hundreds of millions selling the same people Pascal for decades, and it's still on sale now.

https://www.embarcadero.com/products/delphi

My first full time job in FOSS was documenting Java tooling, for a Linux company that told me in my New Hire Orientation how much the company hated Java.

I have met and talked with some Java developers. They are scary. Some of them are the kids who couldn't get onto Computing courses but they are pro Java developers. They can just bodge it together and it works, safely.

IMHO the Lisp industry and community needs to listen to the people trying to make easier, clearer Lisps, such as CGOL and PLOT and Lunar and Dylan, and engage with them, and try to understand why and embrace it, not just continually telling them they are wrong wrong wrong.

Plain old algebraic notation works and non-specialists can, with effort, master it.

But even more than that: the most widely-used programming language in the world today is Microsoft Excel formulæ.

BTW: Recursion is trivial. I had recursive BBC BASIC V code working beautifully when I was 20.


> I really feel like you are determined to misinterpret

Why post here, when you are not willing to accept feedback from other people? I get that you have trouble learning Lisp, but I have been hearing the same stuff for decades. Nothing you are telling me is new. I had to deal with the same arguments 40 years ago, and when you read old Lisp books from the mid 60s -> the same song. I've seen some people getting Lisp and others not. For me the question was always how to help people "to get it" - attitudes like yours, giving up early, are typically an early dead end. The mental block is the bigger problem than the actual thing to learn.

"Anyone could learn Lisp in one day, except that if they already knew Fortran, it would take three days.” — Marvin Minsky.

There is truth to that. People who already have an idea how things should work, need to let that go. Unlearn and start from a neutral position.

> international software industry of buggy, leaky, insecure C code that keeps hundreds of thousands of people in work and makes _tens of millions of dollars_ every year.

Yeah, and I have a lot of respect for that. I use software written in C everyday. It does useful stuff, independent how many problems it has.

> Some of them are the kids who couldn't get onto Computing courses but they are pro Java developers.

I work in a company, which has several hundred people developing in Java. In fact the part of the company I work with is responsible for backend work, which connects several million cars. I've met extremely bright people, doing hard work. I'm not going to try selling them Lisp.

> IMHO the Lisp industry and community needs to listen to the people trying to make easier

IMHO "the Lisp industry" does not exist as an entity and has neither the resources nor the interest to do that. Nobody is interested in CGOL, PLOT or Dylan. These were dead ends. Nice experiments. I like experiments.

Lisp people worked on Dylan when the Lisp jobs went away. Dylan went nowhere. Of the former Lisp people who worked on Dylan, some still think it's a good idea and others have given up on it. Harlequin had several language offerings: Postscript RIPs, Lisp, ML, Dylan, memory management. LispWorks survived when Harlequin went bankrupt, because it had customers with software. Dylan did not survive. It was eventually open sourced and still has not much going on.

The non-existent "Lisp industry" would alienate its core customers by chasing people who want a different tool.

You THINK that there would be a way, but we have heard these same ideas for decades. The problem is not people who can give advice (I've seen lots of talks & papers about this topic) - the problem is that these people have nothing to offer; they are themselves not involved. Nobody will listen to you, because people in the IT industry get all kinds of advice, but "needs to listen" does not pay the bills.

Clojure exists, because people developed it, had a market niche and it survived. In the same original niches (enterprise consulting for web technology, ...) were and still are a zillion other offerings. Every year new and old ideas come and go.

> Recursion is trivial

Then SICP should be trivial for you, because its content is based on Scheme with lots of recursion.

> IMHO the Lisp industry and community needs to listen to the people trying to make easier, clearer Lisps, such as CGOL and PLOT and Lunar and Dylan, and engage with them, and try to understand why and embrace it, not just continually telling them they are wrong wrong wrong.

You misinterpret the Lisp community. "People are not wrong". It's just that the people in the various Lisp communities focus on their own stuff and that's their right.

I have my garden at home, I'm not responsible to sell stuff to all kinds of other garden owners. That's not my business. I'm happy with my garden.

Actually, languages like Python, Java, JavaScript, Ruby, Elixir, OCaml, F#, Wolfram Language and many others... they have already taken most of the Lisp features (typical argument we hear: why use Lisp, when other languages already have its most important features?).

You may not be aware of it, but languages with lots of features from Lisp are in wide use already.


For me, the issue wasn't cognitive, but simply lack of access. The two languages that ran on my Sanyo MBC-550 were BASIC and Turbo Pascal.

Outside of expressions, those languages are essentially prefix in that the operator comes before the list of arguments.


> After this "def wish" has been carried out, the rac function is as well understood by the genie as is car.

Sometimes I wonder what non-programmers think about us when they hear us talk..


Maclisp goodness:

  (compress (reverse (explode x)))
Elisp much improved:

  (defun explode (x)
    (if (symbolp x) (setq x (symbol-name x)))
    (string-to-list x))
  (defun compress (x) (concat x))
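
With those two definitions loaded, the Maclisp idiom above works in Elisp too, though it yields a string where Maclisp would intern a symbol (a quick check):

  (compress (reverse (explode 'abc))) ; => "cba"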


I was wrong: It was "implode" in Maclisp.

  (compress (reverse (explode 'ABC)))
  ;COMPRESS UNDEFINED FUNCTION OBJECT

  (implode (reverse (explode 'ABC)))
  CBA
The point being that I never learn any fancy string-processing commands. I just implement explode and compress.


this is how explode behaves on a lisp machine:

    (defun explode (x)
      ;; print X to a string, then intern each character as a symbol
      (mapcar (lambda (c)
                (intern (char-to-string c)))
              (string-to-list (prin1-to-string x))))
Turning a character into a symbol seems natural, because then you are reducing your needed function space even more. I'm surprised the original operated on prin1 output; not sure what the logic behind that is. On a lisp machine, (zl:explode "foo") gives me '(|"| |f| |o| |o| |"|)


(upgrade (mail (change (trash (fix (break (use (buy it))))))))




he he. good one.

but you may have misunderstood what I meant.

I wasn't criticizing you.

it was just a joke related to that scheme book.


Any Shen people in the house?


Admirer, not user. So ambitious and gorgeous. Hosted on Common Lisp with full integration, so useful now. I hope more people check it out. The new Shen book is awesome.


links?


Here's a link to their website with the book: https://shenlanguage.org/TBoS/tbos.html


Interesting article, I enjoyed following along - but I do hate the parentheses lol


What dialect is he using that has “plus” vs “+” and so on?


I find this article quaint -- I remember reading it decades ago and feeling more receptive to its perspective. Ironically, I prefer using Clojure (though some here challenge its status as a Lisp lol) to interface with Large Language Models rather than Python. Clojure in particular is much better suited, for some of the reasons Hofstadter details, and if you can interact with an LLM over a wire, you are not beholden to Python. But whatever we use to interface to these massive digital minds we are building, including the Bayesian sampling mathematics we use to plumb them, may have its elegance, but it is orthogonal to the nearly ineffable chaos of these deeply interconnected neural networks -- and it is in this chaotic interconnectedness where artificial intelligence is actually engendered.


> Clojure in particular is much better suited

Clojure in general is far better suited for manipulating data than anything else (in my personal experience). It is so lovely to send a request, get some data, and then interactively go through that data - sorting, grouping, dicing, slicing, partitioning, transforming, etc.

The other way around is also true - for when you need to generate a massive amount of randomized data.
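
A small sketch of that REPL workflow, on invented data:

  ;; group some records by size, then summarize each group
  (->> [{:name "a" :size 3} {:name "b" :size 1} {:name "c" :size 3}]
       (group-by :size)
       (sort-by key)
       (map (fn [[size items]] [size (mapv :name items)])))
  ;; => ([1 ["b"]] [3 ["a" "c"]])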


Clojure doesn't have a standard, well-maintained dataframe library - so it is not suitable for any medium to large data science.


I don't do "true" data science, so my voice of "expertise" in the matter is limited. This is the extent of what I've heard. In my opinion, neither of the clauses in your statement is true.

Clojure is very well suited for data science of all shapes and sizes. There's a great meetup led by Daniel Slutsky where they regularly discuss this topic, and there's a #data-science channel in Clojurians Slack where they regularly post interesting findings. As for libraries, anything used in Java/Javascript can be directly used. Besides, there is TMD, https://github.com/techascent/tech.ml.dataset - it's a well-regarded lib and provides solid functionality for data manipulation.


Interesting. This is a new library that certainly wasn't present when I last checked Clojure's offerings - the first commit was in March 2023. It still appears a work in progress to me, but I look forward to checking it out to see how it compares to pandas/dplyr.


Like I said - with or without TMD, Clojure does not lack tooling for DF, anything JVM/JS can be used and even Python libs, if you're inclined to do so. You can always ask in Clojurians Slack for guidance, people are incredibly nice and always eager to help.


> when you need to generate a massive amount of randomized data.

Even faster than Clojure: Open VIM for a VS Code user and ask them to exit.


There's no such thing as a "VS Code user", VS Code is the one that uses you, not the other way around.

btw. this isn't some kind of an FP joke, there's no 'fun' in it, only sad truth.


> I hope you enjoyed Hofstadter's idiosyncratic tour of Lisp. You can find more like this re-printed in his book Metamagical Themas.

This seems like an interesting book.


It was one of my favorites back in the 1980s. It was a followup to Gödel Escher Bach, written in much the same style.


> Why is most AI work done in Lisp?

That’s changed, of course, but it remained true for at least another 15 or 20 years after this article was written and then changed rather quickly, perhaps cemented with deep neural networks and GPUs.

Other than running the emacs ecosystem, what else is Lisp being used for commonly these days?


Some purists won't consider Clojure a "true" Lisp, but it's a Lisp dialect.

> what else is Lisp being used for commonly these days?

Anything that runs on Clojure - Cisco has their cybersec platform and tooling running on it; Walmart their receipt system; Apple - their payments (or something, not sure); Nubank's entire business runs on it; CircleCI; Embraer - I know uses Clojure for pipelines, not sure about CL, in general Common Lisp I think still quite used for aircraft design and CAD modeling; Grammarly - use both Common Lisp and Clojure; Many startups use Clojure and Clojurescript.

Fennel - a Clojure-like language that compiles to Lua - can handle anything Lua-based: people build games, use it to configure their Hammerspoon, AwesomeWM, MPV, Wez terminal and the like, even Neovim. It's almost weird how we're circling back - decades of arguing Emacs vs. Vim, and now getting Vim to embrace Lisp.


When I was there, Apple used Clojure for a lot of stuff involving the indexing of iTunes/Apple Music. I used it for some telemetry stuff on top of the indexer as well. Not sure what other teams used it for.


Google Flights was built on CL, no?


I personally think that Coalton and the stuff it's built on is crazy cool. Coalton is a little library you add to your Lisp, but, to quote the third link here: "In terms of its type system, Coalton’s closest cousin is Haskell." So Lisp's dynamism with all sorts of advanced typing.

QVM, a Quantum Virtual Machine https://github.com/quil-lang/qvm

Quilc, an "advanced optimizing compiler" for Quil https://github.com/quil-lang/quilc

Coalton, "a statically typed functional programming language built with Common Lisp." https://coalton-lang.github.io/20211010-introducing-coalton/


> Why is most AI work done in Lisp?

Yann LeCun developed Lush, which is a Lisp for neural networks, during the early days of deep architectures. See https://yann.lecun.com/ex/downloads/index.html and https://lush.sourceforge.net. Things moved to Python after a brief period when Lua was also a serious contender. LeCun is not pleased with Python. I can't find his comments now, but he thinks Python is not an ideal solution. Hard to argue with that, as it's mostly a thin wrapper over C/C++/FORTRAN that poses an obvious two-language problem.


A friend used lush as his “secret weapon” for a while. I didn’t quite warm to it and now regret not paying attention. It’s amazing how much is packed in “batteries included.”

Apparently it didn’t make the transition to 64-bit machines well? But I haven’t really looked.


It's just as easy to have thin wrappers over C/etc. number crunching libraries in Common Lisp as it is Python. And pure CL code is typically faster than pure Python (though pypy might be a different story). There's no technical reason it still couldn't be dominant in AI.

It's a shame things took the course they did with preferred languages.
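
For instance, with the CFFI library a C function can be bound in a couple of lines (a sketch; cos from libm is used here only because it is usually already linked into the Lisp process):

  (ql:quickload "cffi")                ; assumes Quicklisp is installed

  (cffi:defcfun ("cos" c-cos) :double  ; bind C's cos under the name c-cos
    (x :double))

  (c-cos 0.0d0) ; => 1.0d0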


My take is that Python won by having a complete ecosystem centralizing many tools that were dispersed in different languages:

- Numpy/Scipy/Matplotlib enabled scientists to do data analysis with Pandas similar to what was available in R

- PySpark enabled big data scripts in Python instead of Scala

- PyTorch made Torch available for non-Lua users

Bit by bit, more people got used to doing data analysis and AI research in Python. Some projects were even written for Python first (e.g. Tensorflow or Keras). Eventually, Python had so many high-quality packages that it became the de facto for modern AI.

Is it the _best_ language for AI, though? I doubt. However, it is good enough for most use cases.


Hadn't seen that before, very interesting!


> what [...] is Lisp being used for [...] these days?

I dunno, there's Nyxt, Google Flights, MediKanren, there's some German HPC guys doing stuff with SBCL, Kandria,... I believe there's also a HFT guy using Lisp who's here on HN. LispWorks and Franz are also still trucking, so they prolly have clientele.

There are fewer great big FLOSS Lisp projects than C or Rust, but that doesn't really tell the whole story. Unfortunately proprietary and internal projects are less visible.


Can't speak for the entire industry obviously, but at a few jobs I've had [1] Clojure is used pretty liberally for network-heavy stuff, largely because it's JVM and core.async is pretty handy for handling concurrency.

I know a lot of people classify "Clojure" and "Lisp" in different categories, but I'm not 100% sure why.

[1] Usual disclaimer: It's not hard to find my job history, I don't hide it, but I politely ask that you don't post it here.


> I know a lot of people classify "Clojure" and "Lisp" in different categories, but I'm not 100% sure why

It mostly boils down to Clojure not having CONS cells. I feel like this distinction is arbitrary because the interesting aspect of Lisps is not the fact that linked-lists are the core data-structure (linked-lists mostly suck on modern hardware), but rather that the code itself is a tree of lists that enables the code to be homoiconic.
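
A tiny Clojure illustration of that point: a form is readable data first, and becomes code only when you hand it to eval.

  (def form '(+ 1 2))          ; a list, i.e. plain data
  (eval form)                  ; => 3
  (eval (cons '* (rest form))) ; => 2, the same data re-headed with *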


I mean, you can have a tree of vectors also, so I don't see why lists are needed for homoiconicity.


No, not needed. This argumentation can go both ways; some may even say, "Well, Python is 'Lispy,'" which to me is obviously not. It boils down to what can you do in the REPL, right? https://news.ycombinator.com/item?id=41844611


In my mind Clojure is Lispy, Python is not, nor is Javascript.

In addition to REPL and macros, I think two other Lispy features are essential:

nil is not just the sad path poison value that makes everything explode: lisp is written so that optionals compose well.

Speaking of composing, Lisps tend to be amazing with regard to composability. This is another line that cuts between CL, Scheme and Clojure on one side, with Python and Javascript firmly on the other side in my experience.

Lisps are as dynamic as languages ever go, unapologetically.
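
To illustrate the nil point above with a few Clojure REPL snippets (my own examples):

  (first nil)  ;; => nil, not an exception
  (rest nil)   ;; => ()
  (:name nil)  ;; => nil, keyword lookup on nil is safe
  (some-> {:a {:b 1}} :a :c inc) ;; => nil; inc is never reached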


I just wanted to add that "dynamic" doesn't mean untyped or weakly typed. Clojure is a strongly-typed, dynamically-typed PL. The Clojurescript compiler, for example, can in many cases produce safer JS code than even Typescript ever could.


Out of curiosity, can you give an example of where ClojureScript is safer than TypeScript? I'm pretty far removed from the frontend world so this sounds pretty interesting to me.


This is a slightly expanded answer to your question, apologies for the external link, I could of course repeat here verbatim, but let's not increase entropy needlessly, right? https://www.reddit.com/r/Clojure/comments/1dyjwyo/is_it_easi...

Beware though, that there are today more than one flavor of Clojurescript, nbb for instance still acts just like JS in this regard.


This is exactly what I wanted...thanks!

The last time I did ClojureScript in serious capacity was for a school project in 2021, specifically because I wanted to play with re-frame and the people who designed the project made the mistake of saying I could use "whatever language I want".

It makes sense, but I guess I didn't realize that ClojureScript generates some nice runtime wrappers to ensure correctness (or to at least minimize incorrectness).

I guess that means that if you need to do any kind of CPU-intensive stuff, ClojureScript will be a bit slower than TypeScript or JavaScript, right? In your example, you're adding an extra "if" statement to do the type check. Not that it's a good idea to use JS or TypeScript for anything CPU-heavy anyway...


> ClojureScript will be a bit slower than TypeScript or JavaScript, right?

In rare cases, sure, it can add some overhead, and might not be suitable I dunno for game engines, etc., but in most use-cases it's absolutely negligible and brings enormous advantages otherwise.

Besides, there are some types of applications that are simply really difficult to build with a more "traditional" approach. Watch this talk, I promise, it's some jaw-dropping stuff:

SpreadSheesh! talk by Dennis Heihoff https://www.youtube.com/watch?v=nEt06LLQaBY


Having read Let over Lambda, I would say I find Javascript to be (a superset of?) a lispy language. If functional values with lexical binding are supported, then you get all the power of The Little Lisper.

Perhaps the macro facilities are also convenient, but that is not the part that makes Lisp mathematical; it's the higher-order programming.

And it needn't even be something fancy: just being able to have a data table of tests and have the test functions generated and executed from the table demonstrates that power (sketched below).
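For instance, a minimal table-driven sketch (in Clojure to match the thread; the same shape works in JS with an array of cases and closures):

  ;; tests as plain data, run by a higher-order loop
  (def cases [[+ [1 2] 3]
              [* [2 3] 6]])
  (doseq [[f args expected] cases]
    (assert (= (apply f args) expected) (str "failed on " args)))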


That's mostly my point. A linked-list structure is not the interesting part. I use the "generic" reading of list above and don't mean to imply some particular implementation.


Guix is a Nix-like package manager and distro that is almost entirely written in Guile Scheme: https://guix.gnu.org/

I would guess it's by far the most active Guile project.


Quantum computing and symbolic AI? But also web services, CAD and 3D software, trading, designing programmable chips, big data analytics…

Present-day companies using Lisp (that we know about): https://github.com/azzamsa/awesome-lisp-companies/


>what else is Lisp being used for commonly these days?

It is being used for formal verification in the semiconductor industry by companies like AMD, Arm, Intel, and IBM: https://www.cs.utexas.edu/~moore/acl2/



The pricing engine for Google Flights (and behind many big airline websites) is written in Lisp.


Some computer science departments (and their MOOCs) use the Lisp dialects Racket and Scheme as teaching languages. For example, the DrRacket IDE has an innovative language-preselection feature that allows students to start out with a "Beginning Student Language".

https://www.racket-lang.org/


Running Hacker News


AutoCAD automation?


Yes. AutoLisp was available from the early days of AutoCAD. I didn't use it much myself; I just helped some mechanical engineers with it in a company where I worked, in small ways, just tinkering, really. At that time I was quite junior, so I didn't really grasp its power and didn't play around with it much.


Regardless of your opinion on the utility of Lisp, this is an exemplary piece of writing. Crisp, engaging, informative.

God I miss old Scientific American. Today's SA isn't especially terrible, but old SA, like old BYTE, was reliably enlightening.


The title of his column and book "Metamagical Themas" is an anagram of Martin Gardner's previous column "Mathematical Games". It's clever wordplay, turtles all the way down.


Other Hofstadter book titles with wordplay:

- Gödel, Escher, Bach: an Eternal Golden Braid (you have GEB/EGB, and I guarantee you he noticed those notes form a musical triad)

- Metamagical Themas (anagram of Mathematical Games)

- Le Ton beau de Marot (I don't have my copy at hand, but "ton beau" is surely a pun on "tombeau" meaning "tomb")

- The Mind's I (editor) (I = eye)

- That Mad Ache (translation of "La chamade" by Francoise Sagan; "mad ache" is an anagram of "chamade")


At least one of the covers of GEB specifically had artwork that shows GEB/EGB : https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach


"tombeau" literally means "tomb", but the term also sometimes means "piece written as a memorial", like Ravel's piano suite "Le Tombeau de Couperin". And yes, Hofstadter explicitly links "ton beau" with "tombeau" (he doesn't explicitly mention the "memorial" meaning, though when he mentions the literal "tombeau de Marot" he is talking specifically about the epitaph on it) and also with "tome beau", the great book of Marot's life and work.

I'd find it a cleverer bit of wordplay if "le ton beau de ..." itself didn't feel clumsy. Surely it would always be "le beau ton de ..."?


This was all somewhere in the back of my head but my copy of this book is in my parents' basement somewhere. I'll have to rescue it so I can keep it in my basement.


The author of GEB is a phenomenal writer, an old-style researcher who knew his Greek, and the book for me is more interesting in its commentary on literature and psychology, approaching themes of, say, Foucault.

I don't know about the work's true impact on AI or tech languages, but it's a masterpiece of criticism, analysis and penmanship.


Old school SA was written assuming a basic level of scientific and mathematical background. Many people reading it were professional scientists and engineers who read it to learn about developments in other fields than their own. Current SA seems to be written at a level similar to the science coverage in newspapers, written for the hypothetical "layman" who is supposedly frightened of mathematics and anything technical. I couldn't imagine someone like Martin Gardner or Hofstadter writing in SA today.


Agreed. It saddens me that I completely slept through a golden age of magazines, with no real clue how I could help bring that back.

I was happy with the section in Wireframe magazine that would show how to code some game mechanics every issue. Would love more stuff like that.


Same with the old National Geographic magazine, before it became slimmer and more ad-heavy, IIRC.


Exactly so. I bought the final issue, because it was the last one, and I read it, and that reminded me why I didn't read National Geographic. Because it's mental chewing gum: an enjoyable flavour, without nutrition; pretty pictures, but I learned little.


yes, but what I meant was that the much earlier issues were very good, with not just good pictures, but lots of interesting textual info as well, about the different geographical topics that they covered, e.g. countries, regions within countries, rivers, forests, peoples, etc.

I remember one particular issue about USA rivers which was really good, with great photos.

damn cool article.

the suwannee river was one that was covered.

https://en.m.wikipedia.org/wiki/Suwannee_River

I looked up that river in Wikipedia for the first time today.

TIL it is a blackwater river. first time I heard the term.

https://en.m.wikipedia.org/wiki/Blackwater_river

the NG issues used to come with very good maps as supplements, too, in color.

also there used to be nice color ads about good cameras, IIRC, like canon, minolta, etc, and cars like the cadillac, lincoln, etc.

gas guzzlers, of course.

a different time.


I remember reading GEB and being shocked that he never mentions Lisp. He _does_ wade into CompSci topics, but it's something half-hearted about how compilers are programs that read and generate programs. This really should've been integrated into a revised edition of the book.


Huh? He mentions Lisp all over the place. Check the index.


Nonsense.

"One of the most important and fascinating of all computer languages is LISP (standing for "List Processing"), which was invented by John McCarthy around the time Algol was invented. Subsequently, LISP has enjoyed great popularity with workers in Artificial Intelligence."


Give it another go! _The Anatomy of LISP_ is the first entry in the bibliography.


Lisp Needs Braces


> Lisp needs braces

You're a troll, but I'll feed you. I adapted Peter Norvig's excellent lispy2.py [0] to read JSON. I call it JLisp [1].

Lispy2 is a Scheme implementation, complete with macros, that executes on top of Python. I made it read JSON, really just replacing () with [] and defining symbols as {'symbol': 'symbol_name'}. I built it because it's easier to get a webapp to emit JSON than paren Lisp. I also knew that building an interpreter on top of Lisp meant that I wouldn't back myself into a corner. There is incredible power in Lisp, especially the ability to transform code.
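Concretely, per that mapping, a form like (+ 1 (* 2 3)) ends up roughly as:

  [{"symbol": "+"}, 1, [{"symbol": "*"}, 2, 3]]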

[0] https://norvig.com/lispy2.html

[1] https://github.com/paddymul/buckaroo/blob/main/tests/unit/li... #tests for JLisp


Here, have another approach to Lisp formatting:

https://readable.sourceforge.io/
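For a taste, factorial in sweet-expression style looks roughly like this (from memory, so check the site for the exact spec):

  define factorial(n)        ; f(x) "neoteric" call syntax
    if {n <= 1}              ; {...} is curly-infix
       1
       {n * factorial({n - 1})}

which the reader turns back into the ordinary nested s-expressions.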

I looked into porting it to elisp a while back, but the elisp reader was missing a feature or two sweet-expressions require. I should see if that's still true...


Would be nice. But I think after hours/days of working with Lisp, the brain starts to see it as sweet expressions. That is why all attempts to move away from s-expressions fail to get traction: anyone who starts doing it pretty quickly discovers it is really not needed.


I think this is true for the small percentage of people who get through that initial stage -- but it excludes the (I suspect) majority who just bounce off it.

I just bounced off it, and I have tried quite hard, repeatedly.

Idea: for the rest of us who can't simply flip syntax around in our heads, there should be an infix Lisp that tries to preserve some of the power without the weird syntaxless syntax.

There are of course several, of which maybe the longest-lived is Dylan:

https://en.wikipedia.org/wiki/Dylan_(programming_language)

... but instead of Dylan's Algol- or Pascal-like syntax, do a Dylan 2 with C-style syntax?


In what form?

We have Dylan, Julia, and a couple of other attempts at the matter.



