Why Lisp Is Unpopular
62 points by mnemonicsloth on April 3, 2008 | 157 comments
A picture is worth 1K - 3 * 2^3 words:

http://www.hq.usace.army.mil/cepa/pubs/jul00/radar.jpg

That's a phased-array radar installation that (I understand) was built to detect Soviet ballistic missiles inbound over the North Pole. Google has more images.

The term "Phased Array" has interesting implications here. It means that received phase information is preserved among the many small antenna elements on each face of the structure. That only works if the paths traversed by the antenna outputs are all the same length (modulo lambda), with a margin of error around .1 * lambda, where lambda is a representative wavelength of the received signal.

This thing uses microwave signals [1], which means we can reformulate the preceding paragraph as follows: You're looking at a picture of a single circuit the size of a large office building, built to a precision of 100 microns. Now's a good time to point out that the little yellow thing in the lower left corner of the image is probably some kind of earth-moving machine.

The US military built a number of these installations in Alaska and Canada during the Cold War. Let's pause here to think about all the structure-hardening and weatherproofing that must have been involved.

What does this have to do with Lisp? It's not an exaggeration to say that building and maintaining these early warning systems required the attention of at least a third of all American microwave engineers [2]. When the Cold War ended, most of them left the big contractors the DoD hired to build the radar and started doing basically the same work for cellular companies [3]. Ten or fifteen years later, mobile phones became ubiquitous.

In the case of Lisp, there was no continuity in the transition. Common Lisp, in particular, was primarily in use by a rarefied group of specialists working on room-sized computers at places like DARPA, thinking about things like AI for driving tanks across central Germany. When the funding dried up and the specialists had to move on [4], they found work writing C, Perl, or Java on microcomputers. So Lisp lost its user base and the last of its major hardware platforms (now that you couldn't buy a Lisp Machine any more) all at once.

Of course, when programmers discuss Lisp's continuing lack of popularity, regardless of their opinion of the language itself, they seem a lot more willing to blame things programmers ultimately control, like the social habits of Lisp users or the damn parentheses.

[1] I'm not a microwave guy, so some of this explanation is simplistic almost to the point of inaccuracy. In particular, I don't know what frequency bands these installations use.

This is as good a place as any to point out that I also have no special knowledge about Lisp at big organizations after the end of the Cold War. People who do are strongly encouraged to email me if I've gotten anything wrong.

[2] I don't have a citation for this number, but one of my old professors does. I'll see if I can get him to email it to me.

[3] "3G" cellular technology would be unworkable without big antenna arrays.

[4] A lot of the "classic" books on Lisp and related topics were written during this period. SICP, PAIP, On Lisp, and The Seasoned Schemer come to mind.




How to make Lisp popular (or as popular as Lisp will ever get):

1) SBCL working very well on the big three PC platforms.
2) Bindings + easy installation for popular libraries.
3) Education (pg's articles, everything on planet.lisp.org).
4) Solid debugging, perhaps with things like breakpoints.

There is much activity on all 4 problem items that I identified above, so Lisp IS getting more popular. I don't know if programming.reddit.com can be a scientific gauge of how popular Lisp is, but it seems to be mentioned increasingly often.

Of all the problems Lisp has, the parens are a much smaller one than the gigantic effort it takes to use Lisp with any popular libraries once you step outside the way the larger community is using it.


Noise != usage. Don't confuse attention with popularity. I think about the language and post comments about it on forums and read great Lisp code (e.g. Arc), but I'd never actually use it for anything non-trivial. Why? Because I write web apps and existing Lisps don't make that easier or more fun.

The only (!) thing Lisp has to do is efficiently and absolutely abstract away XML, CSS, and JavaScript. No Lisp is going to get any uptake otherwise. We could do this with Arc.


I'm interested in why you think that Lisp systems haven't yet efficiently abstracted away from HTML/CSS/JS. I work on such a system every day and the abstractions seem pretty efficient to me.


Maybe they have. What system are you using?


I use Parenscript and LML2. There are a profusion of such libraries (although for CSS I wrote my own simple thing). I'm very happy with the abstractions I'm able to layer on top of them with macros.

One thing it took me a while to realize about the Lisp world is why there tend not to be standard frameworks or libraries for some of this stuff. The answer is that, for certain kinds of problems (basically, anything involving metaprogramming), it's actually easier to solve your problem yourself than to learn someone else's framework and then use it to solve your problem. A good example would be unit testing... there are probably a dozen unit testing frameworks, none standard, none much used. The reason is that the frameworks don't add much value over what a programmer can easily do for himself, with all the advantages that implies. I think this may also be why web app frameworks are less prominent in the Lisp world.
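To make the unit-testing point concrete: a minimal, usable `deftest` is only a few lines of Common Lisp. This is an illustrative sketch with invented names, not the API of any of the dozen frameworks mentioned.

```lisp
;; A registry of (name . thunk) pairs.
(defvar *tests* '())

;; Register a named test; the body should return true on success.
(defmacro deftest (name &body body)
  `(progn
     (push (cons ',name (lambda () ,@body)) *tests*)
     ',name))

;; Run every registered test, trapping errors as failures.
(defun run-tests ()
  (dolist (entry (reverse *tests*))
    (format t "~a: ~:[FAIL~;pass~]~%"
            (car entry)
            (ignore-errors (funcall (cdr entry))))))

;; (deftest addition (= (+ 1 2) 3))
;; (run-tests)
```

That's roughly the whole value proposition of a small test framework, which is why rolling your own feels cheaper than learning someone else's.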

This is not the case with all libraries, of course. CL-PPCRE adds huge value, to pick an obvious example, but not in the class of problem I'm talking about.


Correct me if I'm wrong, but Parenscript doesn't abstract away HTML, CSS, and JavaScript; it just lets you write HTML, CSS, and JavaScript using a Lispy language.

This is very different than abstracting away HTML, CSS, and JavaScript, which is where I think the real problem is.

I shouldn't have to learn 3+ different languages (HTML, CSS, JavaScript, etc.), AND 3+ different variations of each (Mozilla, WebKit, IE, etc.), AND multiple versions of each, AND another abstraction layer (Parenscript) just for basically macros, just to write a web application.


It seems to me we've had this discussion before! No, it doesn't abstract these things away in the sense that compilers abstract away machine language. That is, you still have to know about them. Is that what you mean?

If so, it's a fair distinction. Probably it would be more accurate if I said abstract over rather than abstract away. We can get rid of a lot of repetition this way, but not a lot of the details that remain after that. Still, that's a big deal; better than anything else I've seen by far. So while "abstract away" may be an overstatement, I think that "just lets you write HTML, CSS, and JavaScript using a Lispy language" is an understatement. It doesn't just let you write those things - you don't need Lisp to do that. Similarly, saying "just for basically macros" (my emphasis) reads like an oxymoron to me. Macros are a big deal!

If you can do better, I definitely want to know. But my definition of "better" includes being usable in a standard browser, and "usable" includes performant.


Weblocks does this (sort of). I've just read about it, not actually used it much.

http://www.defmacro.org/ramblings/ui-dsl.html


I can't help but think that people who want to replace HTML+CSS+JS haven't written a widely used web application before, because the idea seems so absurdly impractical and would become such an incredible time sink.


I'd tend to agree on those who want to replace HTML/CSS/JS with some other abstraction. What I'm suggesting is that they're arbitrarily divided and should be unified under one syntax, which then compiles into the served "bytecode." JSON would probably make the most sense. But, as you suggest, it would take some time to get it right. So I'm sticking to doing it by hand, all divided up, for now. Maybe after our startup launches...


I think even just MzScheme could become viable if the untyped stuff gets a bit more polished and there is just a lib everyone can agree on called "pragmatic.ss" that pulls in the relevant SRFIs.

Most of Lisp's problems these days stem from social issues and logistical issues. The core tech is exceedingly sound, and the language itself is great.

To someone out in the startup circles, it almost seems like Lisp and Scheme focus more on being a standard than being a pragmatic and global platform. While noble, this is not the path to massive popularity. Some people might be okay with that, but it seems to me like you could serve both goals and end up with an overall better product.


should be unified under one syntax, which then compiles

You do get that this is exactly what Lisp hackers who build web apps do?


CSS is very difficult to replace with an abstraction because it's too quirky. Compiling a high level language down to CSS without human intelligence is so difficult, it's completely impractical.

HTML, on the other hand, is easy to abstract. I'm glad I did it in Weblocks, it saves me an enormous amount of time. ASP.NET uses this strategy (use the DataGrid control for a while, and then try coding without it).

JS is somewhere in the middle.
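As a sketch of what "abstracting HTML" can look like (this is illustrative only, not Weblocks' or LML2's actual API): represent markup as s-expressions, where each form is `(tag &rest children)`, and render recursively.

```lisp
;; Render an s-expression form like (ul (li "one") (li "two"))
;; into an HTML string. Atoms become text; lists become tags.
(defun render (form)
  (if (atom form)
      (princ-to-string form)
      (let ((tag (string-downcase (symbol-name (first form)))))
        (format nil "<~a>~{~a~}</~a>"
                tag
                (mapcar #'render (rest form))
                tag))))

;; (render '(ul (li "one") (li "two")))
;; => "<ul><li>one</li><li>two</li></ul>"
```

Once markup is just data, macros and higher-order functions can generate and transform it, which is where the time savings come from. (Attributes, escaping, etc. are omitted here.)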


I can't help but think you haven't written a sufficiently complicated client side web application.


Solid debugging, perhaps with things like breakpoints

I was actually complaining here about this a few days ago. What activity on this item are you referring to?

Breakpoints as such are probably superfluous, since it's so easy to insert "(break)". But I wish I could single-step through s-expressions and have them highlighted in the editor, and I wish there were an easier way to inspect state. Actually it doesn't seem that sldb is so far away from what I want (with the exception of single-stepping and highlighting), yet I always find myself reverting to the equivalent of printf debugging.


You can already single-step and highlight the piece of code you're stepping through (assuming you use Slime/sldb). You just need to proclaim a debug level of 3 before compiling your software.

http://www.sbcl.org/manual/Debugger-Policy-Control.html#Debu...
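Concretely, the tip amounts to something like this one-liner (see the linked SBCL manual page for the exact policy details):

```lisp
;; Ask the compiler to retain full debug information, so that
;; Slime/sldb can single-step and highlight source forms.
(declaim (optimize (debug 3)))
```

Put it at the top of the file (or evaluate it at the REPL) before recompiling the code you want to step through.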


Glad I looked back at this thread. Thanks for the tip - I will check it out.

Edit: I seem to recall sldb works better with SBCL (which I'm not using). Perhaps that's my problem?


I think the problem Lisp has is that very few people are forced to use it. If you look at the average programmer, he doesn't want to think about programming, he just wants to press keys and go home. So a language that requires you to think instead of type is not something that he is going to flock to.

I think the sad reality is that most programmers aren't very smart, so smart languages will never become popular.

I'm not sure why it matters though, there are plenty of CL libraries (and jobs), so if you like lisp you can get by just fine.


"If you look at the average programmer, he doesn't want to think about programming"

This is nonsense. You can't just 'press keys and go home' and make working software. Most of the programmers I've worked with thought a great deal about programming.


What makes Lisp a "smart" language?


Metaprogramming capabilities. Most people seem to prefer keyboard macros or cut-n-paste to actually writing language-level macros.

Anyway, Lisp (and many other languages) gives you a lot of tools to make programming more thinking-intensive and less typing-intensive. Since typing is easy and thinking is hard, it follows that many people will prefer the language that makes you type.

JMHO.


I suspect there's a good bit of truth in your statement. I feel compelled to point out that it's incredibly short-sighted of programmers to think that way. Programmers spend a lot more time reading code than writing it, and it's hard to read code that doesn't use the right abstractions.


There's the problem of how you define readability. Many people evaluate it on a per-line basis, rather than comparing pieces of code that achieve equivalent functionality. Measuring readability like this is unfair to languages that favor terseness beyond that of natural language.

In order to understand good Lisp code you need to learn new tools for expressing procedural concepts. Tools that are better, for that task, than natural language. Most people would think there is no such thing. Most people will always stick to whatever feels more intuitive at each stage of their formation as programmers. This is often a handicap to learning to think in more powerful ways.


Good point!


while this might be true, i think focusing on this stuff is wasted time. i'm not a lisp programmer, but from reading about it here at hacker news, i'm pretty sure that the lisp community would only be hurting itself by taking the average blub programmer into account.


I think most people prefer languages that let them think about the problem they are trying to solve, rather than think about what the macros expand to, or what happens when they call that continuation.


Sure. Nobody would ever write a macro to solve a problem.

No wait: on second thought, I surrender to how completely dumbfounded I am by this comment.

I'm calling Kenny...


The point is that "requires you to think" is not a feature, it is a bug, because that thinking could be put to better use on whatever it is the program is trying to solve. Everything else being equal, this means that language features that can be used without much thinking will do better than language features that can't.


So programmers who work with objects and methods don't have to think about them, while programmers who work with macros do? Sorry, but this just seems ludicrous to me.

Perhaps what you mean is that when a construct is unfamiliar to a programmer then they have to think harder about how to use it?

http://www.google.com/search?q=blub


I am saying that working with objects and methods requires less high-level cognitive processing than macros and continuations, not that you don't have to think about them. This is because the brain can treat the objects largely as if they were physical objects. With macros that is much harder.

No, I don't mean unfamiliar. Things like parentheses and prefix notation are unfamiliar to many people, but once learned, they don't require much more cognitive processing than other notation. There are several things in Java that are similarly weird until you learn them.


As far as I can tell, you're talking about your experience of what's easier and harder. Your experience is not "the brain".

Certainly if you're habituated to the object-model style of programming, other forms of abstraction may seem weird. The irony here is that twenty years ago, the people making your kind of argument were directing it all against objects.

As a side note, treating programming objects as if they were physical objects works as long as there's a good fit between the two. As soon as you need the objects to behave in ways that physical objects don't (and believe me, in any complex OO system, you will), you find yourself tied up in knots that it will take a lot of thinking to extricate yourself from. (Speaking of complexity that isn't intrinsic to the problem...) Not accidentally, the tools that people reach for then tend to be ersatz versions of metaprogramming (reflection, code generation), and Greenspun is off to the races...


"I am saying that working with objects and methods requires less high-level cognitive processing than macros and continuations, not that you don't have to think about them. This is because the brain can treat the objects largely as if they were physical objects. With macros that is much harder."

It's not. It's just a matter of practice. Most people find OO modeling natural (even when it is extremely tortured and difficult) because they've acquired a comfort with it born of many hours of practice. You might argue that you're not referring to familiarity with the paradigm, but I think you are, and don't realize it.

I assure you that a functional style has equal power to abstract as a OO style. You're right, our brains have a good ability to think of things as sets of objects. But the lisp style leverages the linguistic propensity to give words special significance. We're also very good at learning languages and parsing sentences in context, so the lisp linguistic style leverages similar natural cognitive abilities.


I am familiar with functional programming. In fact, for some reason recursion and higher-order functions always seemed easy to me (unlike pointers and virtual methods), but I think that's a quirk of my brain. Many other people seem to have a lot of trouble with the idea that you can just make a recursive call and trust that it's going to do what it is supposed to.

Not me personally, though.

But the lisp style leverages the linguistic propensity to give words special significance. We're also very good at learning languages and parsing sentences in context, so the lisp linguistic style leverages similar natural cognitive abilities.

Linguistic processing does require more cognitive effort, especially when it involves something like a macro that can change the context in which it is invoked. For exactly the same reason, most people are more productive with a graphical user interface than with a command-line one. This is directly related to the fact that abstract linguistic processing is more difficult than manipulation of the tangible, physical objects you find in a GUI.


Most people find OO modeling natural

I can't resist pointing out again how laughable this sounds to anyone who remembers the long debates about how hard it is to "think in objects" and how objects "would never work for the masses". (Actually, that was right, because the "most people" in question are still largely writing much the same code as they always did and always will; now they just surround it by "public class".)

I'm not disagreeing with you at all; I just feel compelled to make sure that somebody else sees this irony!


Maybe it's just because I've been using Common Lisp exclusively for a year, but what's so hard about macros? Especially, using them (as opposed to writing them).

You don't have to think about how the macro is written or what it's expanding into when you use it, you just have to know the semantics of the macro. "When I pass such and such, it will produce code that evaluates this in that way". Similar to a function or a language primitive right? If you're writing a for loop in Java, do you have to care how it's implemented in the JVM?!

As for writing macros: you're writing a function that takes an AST and returns an AST to use as replacement. This function will be called at compile-time to change your nice high-level macro call into lower-level code. There are a couple pitfalls (variable capture, multiple evaluation) but they're always the same and you'll learn about them quickly enough that it will become a sixth sense.
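The two pitfalls the parent names (multiple evaluation and variable capture) fit in a few lines. This is a toy example for illustration, not from any library:

```lisp
;; Multiple evaluation: the argument form is spliced in twice,
;; so (bad-square (incf i)) increments i twice.
(defmacro bad-square (x)
  `(* ,x ,x))

;; The fix: evaluate the argument once and bind it to a gensym,
;; a fresh symbol that can't capture any variable in user code.
(defmacro square (x)
  (let ((val (gensym)))
    `(let ((,val ,x))
       (* ,val ,val))))
```

Once you've hit each of these once or twice, reaching for the gensym-plus-single-binding pattern becomes the "sixth sense" described above.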


right. to me macros are just a different style of function. i don't see why people would be spooked by them


Or what happens in that subroutine.

Or the performance characteristics of that library.

Or the fencepost cases of a loop.

Or which exceptions to handle.

...


You're looking at a picture of a single circuit the size of a large office building, built to a precision of 100 microns.

I agree that this is impressive, but remember that you get to cheat: You build the thing, measure how far off you are, and stick delay lines in the short branches until everything lines up. Admittedly, if you're off by a hundred meters the required delays would be embarrassingly large, and the architect would get teased a lot, but even then I don't think it would pose an insurmountable problem.

Meanwhile, if the essence of your argument is that Lisp was used by mainframe guys and wasn't pushed hard enough in the microcomputer world, I can't help but agree. God, if only I could travel back in time and replace all those useless Pascal books I read as a teenager with Lisp books...


>You're looking at a picture of a single circuit the size of a large office building, built to a precision of 100 microns. I agree that this is impressive, but remember that you get to cheat: You build the thing, measure how far off you are, and stick delay lines in the short branches until everything lines up.

they call that Agile Microwave Engineering

oh, and you must have two microwave engineers working on each delay line.


There are many hypotheses out there that Lisp lacks popularity for this or that flaw in the language or in the community. The problem with these hypotheses is that you can point to similar or bigger flaws in languages that have grown popular. If a language has good qualities, people will figure out ways around the problems.

My hypothesis is that Lisp is unpopular for the exact same reason some of us love it: concise code. Everybody praises concise code, but humans love looking at things that have repeated patterns. The more you compress information, the fewer repeated patterns there are to look at.

I wish PG well with Arc, but I don't think popularity will come from fixing anything that's wrong with Lisp. By my hypothesis, Lisp only becomes popular by fixing what's right with it.


I'll throw in my 2 cents... Up until recently, one "flaw" Lisp had that other languages didn't was the lack of one obvious best implementation. If you wanted to work in Perl, Python, Java, Ruby, etc., you knew where to get them, and you knew they would work similarly across platforms, so programs you wrote on your Mac would work on Windows and Linux, etc. With Lisp, different implementations did the "interesting" stuff (threads, sockets, FFI, etc.) differently, and hackers didn't want to invest time learning one implementation if everyone else decided that a different one was the best. No one wants to bet on the loser (Blu-Ray vs. HD-DVD), so they stay away until a clear winner emerges. It looks like SBCL is becoming that best implementation, and once threads work on Windows so the web frameworks "work" there, that will help greatly.

Another barrier is having to learn emacs/SLIME to have a decent development environment. That is a big piece of yak shaving for the average developer. Hopefully Cusp in Eclipse or something else will help.


Tell me, where's the clear winner between ruby and python? Between python and PHP? Between PHP and Perl? Between OCAML and Haskell? Plenty of languages have managed to rise in popularity through a time when you didn't know if any of your code might run on the future winner. At least with Lisp implementations you know some of your code will work with the winner. "Waiting for a clear winner" is not a reason for Lisp not to grow in popularity.

Perl, PHP, Python, Ruby -- none of these emerged from the womb with a full-grown development environment. They managed to grow to the point where people wrote Eclipse plugins or what have you. Lack of a "mainstream" development environment is not a reason for Lisp not to grow in popularity.


Unless you're implying that each Lisp interpreter is its own language, I don't think your contention really applies. If I learn Lisp on Lispworks, and go sit down at GNU CLisp, I'm not programming in a different language - I'm using a different sub-set thereof. When I sit down to a Ruby program, and then switch to a PHP script - there are fundamental differences between the two languages that transcend the library or features either provides.

Ralphc's point wasn't about there being a clear winner in the dynamic language space, but that each of the languages mentioned had one canonical interpreter.


The issue here is human decisions, not a taxonomy of programming tools. Or do you suggest that Lisp dialects diverge further? If you called them separate languages, would there be a better chance of them growing popular?


On the contrary, I'm suggesting that Lisp dialects do not diverge as far from each other as the various dynamic languages do. That means that the idiosyncrasies each dialect has are going to be more glaring. The other dynamic languages don't (yet) have dialects, so you don't have to take the design decisions of the interpreter into consideration - after you've decided on the language.

In that case, I think that there might be a better chance of a sub-set of Lisp dialects becoming popular if they were described as separate languages. It will be interesting to see how Arc develops from MzScheme, and whether or not it becomes more popular with non-Lispers.


Perl, PHP, Python, and Ruby are all interesting failures. They are important for their effect on languages that will last longer (Lisp, ECMAScript) though.

Edit: I don't mean these are "failures" now, just that they're doomed in the long run. Does anyone think they'll be able to stand any of these languages in 2018? I don't think they'll have evolved much by that point either. The older a language gets, the harder it is for it to evolve. And if it tries to make too big a leap, people simply don't go for it (PHP5, Perl 6).


Like it or not, I don't think you can call any of those languages failures. Certainly not PHP, Python, or Ruby. I would venture to guess that all three are more commonly used than Lisp.


Failure in the Grahammian sense of being a dead-end.

"I think that, like species, languages will form evolutionary trees, with dead-ends branching off all over. We can see this happening already. Cobol, for all its sometime popularity, does not seem to have any intellectual descendants. It is an evolutionary dead-end-- a Neanderthal language." - http://www.paulgraham.com/hundred.html


Though I disagree with most of the conclusions in that essay, and in particular the expectation that Lisp (or a direct descendant) is going to be the language of the future, I still think it's way too early to call the above languages failures, even in the "Grahammian" sense.

Python and Ruby especially are growing in popularity, and I think it's awfully premature to consider them evolutionary dead ends at this stage in the game.


I don't necessarily agree that those languages are dead either. I was just clarifying the point I thought he was trying to make. Plus I wanted to turn pg into an adjective :)


I mean failure like the tyrannosaurus rex. That is, from an evolutionary perspective. Not at the arbitrary present. And I'd wager that Perl is still actually more used than Python or Ruby, it just doesn't get the love on social news sites and elite blogs.


I think Perl has been relegated to short scripts and legacy code bases. I don't see a lot of new projects being developed in Perl. That's what I meant by leaving it off the list.


I really don't find that lisp, at least common lisp, is particularly concise. Consider lambda functions, one of those things that lisp is particularly praised for:

   common lisp: (lambda (x) (+ x 1))
   C#3        : x=> x + 1
   ruby       : { |x| x + 1 }
   python     : lambda x: x + 1
   Haskell    : \x -> x + 1
   
Just in terms of character count, CL is the longest of the lot.

No, the problem with trying to get a lisp under your belt has already been mentioned. Compared to the alternatives:

- choosing an implementation is a research project (clisp? sbcl? allegro?)
- documentation is weaker
- libraries are harder to find and install (ASDF vs Rubygems, Python eggs, etc.)
- emacs/slime is an investment to learn
- examples are few and far between
- the language itself (the words, not the punctuation) is inelegant (mapcar, setq, cdadr)

I've tried really hard to learn lisp, and it turns out that actually, while there may be some platonic lisp that IS great, actually programming common lisp, now, is a massive exercise in struggling in the dark.


In my experience, Common Lisp often doesn't come out best at the code snippet level. Where it shines is in building whole systems. After a while one passes an inflection point where one realizes, "Holy cow - what I'm doing now is supposed to be way harder than it is". I probably have this experience every week.

You obviously put a lot of sincere effort into learning Lisp and I agree with much of what you say. After all that work, I wish I knew an easy way to communicate the joyful side to you.


The best demonstration of that "big system" effect that I've seen, that's easy and small enough to read and comprehend, is the database[1] or unit testing[2] chapters in Practical Common Lisp. They show good examples of using macros and HOFs to completely remove boilerplate code, and that it's simple enough to do it for even simple boilerplate.

[1] http://www.gigamonkeys.com/book/practical-a-simple-database....

[2] http://www.gigamonkeys.com/book/practical-building-a-unit-te...


Unfortunately that is not really what people mean with "big system" - although I do agree with your point. It shows how nice it can be when you add on complexity.

I really should get that book in paper form.


Yeah, but the bigger the system, the harder it is to create a non-IP protected, easily digestible example.


Which means you have to rely on he said/she said, hearsay, and other totally non-objective accounts.

To find out for myself, I plan on one day doing something large and messy with arc. I pick arc because I figure that in the eons it will be before I get around to it, arc will still be around (s/arc/a lisp/).


It may well be that it only really becomes apparent on larger projects. It's hard to get to that point, though, and maybe that's why the takeup is low.

OTOH, some languages are particularly good on small things, but don't offer much help for large problems.


Just out of curiosity, did you like working in Lisp? I mean, if you can abstract away from the annoyances you cited, did you actually enjoy the Lisp mode of programming (basically, writing syntax trees)?


I did, and I didn't.

The "I did": It seems very sensible to use s-expressions for everything. That lispy syntax meant that I started seeing ways to encode almost everything as lisp. I even started keeping a todo list in s-expressions;

(do (buy bread) (tidy (kitchen living-room bathroom)) (get life))

The classic XKCD cartoon had it perfectly: "I felt a great enlightenment. I saw the naked structure of Lisp code unfold before me. The patterns and metapatterns danced. Syntax faded"

I didn't: when it came to actual common lisp, cracks started to appear. The crazy mini-language for looping. The arbitrary-sounding names. The feeling that, if I ever wanted to do anything remotely windows-specific I would face years of horror.

I've blogged about it more at http://www.stevecooper.org/2008/02/13/impractical-uncommon-l..., if you've got too much free time. ;)


I read your post. So, you appreciated the core beauty of the thing but the details got you down. What a shame.

I think what happens to most of us who love Lisp is that, at a certain point, the initial frustrations are eclipsed by how much more productive we are (and liberated, if I may put it that way). That point comes sooner for some than others. In your case, not soon enough. But Lisp will no doubt still be around if you ever do decide to get back to it.


i never really touched lisp until Arc. i knew lisp was the language i wanted to be programming in, but common lisp feels mainframey and scheme has an ivory-tower anal aura about it

once Arc gets its own independent implementation i will probably start using it more heavily and seriously


You can shorten (+ x 1) to (1+ x), and if you get annoyed typing "lambda", it's not that hard to change it to something shorter like "fn". Also, CL has a lot of built-ins so I don't find I need to write lambdas nearly as often as in other languages. In this example, just say "1+".

Another thing you'll find is that for trivial problems, Lisp might take a few extra characters, but it scales much better. Try making your Python lambda print the value of x first. Try telling your Ruby lambda that x (and x+1) fits in a machine word so it can compile the function into only a couple of CPU instructions. Try writing a metacircular evaluator in C#. :-)

When you're just getting started, in any language, it doesn't really matter which implementation you pick. (FWIW, CLISP is byte-compiled, SBCL is a native compiler, and Allegro costs $600. Not unique to Lisp: you can find expensive proprietary C++ compilers, too.)

As a Lisp and Ruby programmer, I disagree that Lisp's documentation is weak. I would be very happy if Ruby's docs were even half as good as Lisp's.

I type "sudo apt-get install library-name" for both Lisp and Python libraries, so no difference there. (Ruby gems are less frequently packaged, and aren't compatible with the FHS, so they're a little harder.)

Lisp examples seem to be more often found in (dead tree) books, while Ruby and Python examples are more often found on the web. Since Ruby and Python libraries (and even syntax) are still in flux, and Lisp predates the web, this kind of makes sense. If you can't buy a couple books, it can be rough, though some good Lisp books are becoming available on the web for free.

Those symbols you picked are a little weird, but when's the last time you actually saw a cdadr? It is at least consistent (which is more than can be said for some languages!). I don't find "unshift" or "__gt__" any better.

I don't want to sound like a pure apologist. I admit that Slime is an investment (if you don't already know Emacs). Apparently there is a Lisp plugin for Eclipse, but I know nothing about it. It is a shame more IDEs don't support Lisp well, though I can see why they don't.

Finally, it could be simply that Lisp doesn't map well to your mind. (I don't mean this as an insult -- everybody's different. Maybe Smalltalk or Prolog or Sisal is your cup of tea.) I don't think Ruby maps well to mine, though I know some people who seem like Matz reincarnate -- which could explain why the docs are so sparse! The core of programming, to me, is having fun building good abstractions, and if Lisp isn't doing it for you, by all means, find (or invent!) something that does. Cheers!


Hi, Ken. Thanks for the reply.

I think what's happening currently in programming is that lisp-isms are filtering through to other languages. What used to be unique lisp-juice is available more generally, so the switch to lisp isn't perhaps as compelling.

Anyway. I'm off to write that metacircular evaluator in C# ;)


> - choosing an implementation is a research project (clisp? sbcl? allegro?)
> - documentation is weaker
> - libraries are harder to find and install (ASDF vs Rubygems, Python eggs, etc.)
> - emacs/slime is an investment to learn
> - examples are few and far between
> - the language itself (the words, not the punctuation) is inelegant (mapcar, setq, cdadr)

- Exactly.

Last year I started reading about Lisp etc, and wanted to play with it a little bit.

These are the exact same obstacles I faced.

If you ask about "IDEs" for Lisp, you're met with downright hostility for not embracing the glory of emacs with open arms - anything else is just unfathomable.

Well, what if I'd like to just, you know, start using the language, and not learn a whole new world of an editor first?

"Why does emacs + SLIME have to be the only viable option?"

That's not a question you can ask, of course, but the answer would be something like "roll your own" anytime you need anything.

(Oh, and Xach is a prick. He and Krysztov(or whatever) are practically high-fiving each other on IRC every time they alienate another new would-be Lisper)

As for the implementation research, I sort of came to the conclusion that SBCL was the way to go, but documentation (for anything?) was pretty scarce, and ASDF feels like an obstacle too.

Later on, I realized that Scheme is more elegant as a language, but it's more or less in the same situation as CL.

Compared to these two, something like Python feels much more accessible to me (being a novice and all), because I get the feeling that I can just start using things whenever I want to make that effort.

But that's just it - with Python, there's only the effort of learning the language as you use it.

You learn the standard library as you go, and suddenly you notice you've accomplished a lot of things and had fun while at it.

- At least that's how I imagine it will be when I get around to working on my own super-secret web-startupy project.

With Lisps, you need to make a lot of effort just to get to the beginning, and that's a big problem.


Clozure Associates has an open source Common Lisp with an IDE for Mac OS X. We're also working on the documentation. We're specifically working to remove the speed bumps for new Lispers. Check out http://www.clozure.com/clozurecl.html. We plan to have IDEs for other OSes too in the future.


You can use CUSP (an Eclipse plugin which provides functionality similar to SLIME for Emacs), and the commercial impls have their own (admittedly Emacs-esque) IDEs. There's even a SLIME-derived plugin for vi! It really is worth the effort to learn Emacs, though.

I'm unconvinced that installing lisp libraries is so very horrible; I've had far more trouble with python ones, myself.


I even tried CUSP back then, but it was somehow confusing (or refused to work), I don't remember.

Maybe it's because I don't normally use Eclipse.

If someone is interested in other alternatives, here's one: http://phil.nullable.eu/

- Phil's "ABLE" editor. I tried it too, but it didn't work too well on Windows (something about CLISP, iirc).


M-x slime (starts SLIME and SBCL), then C-x 2 (split the window in two: one for coding, one for the REPL)

..open a file in the top window by pressing C-x C-f and typing hello-world.lisp <enter>

..yay, start coding..

C-c C-c to eval a form, C-c C-k to eval/compile the entire file, q to close down "popups" (exceptions .. etc.)

yawn .. what's the problem? .. or are some of us somewhat "limited"?


"I've tried really hard to learn lisp, and it turns out that actually, while there may be some platonic lisp that IS great, actually programming common lisp, now, is a massive effort in struggling in the dark."

The weedout process has worked its magic again ;P

You usually get rewards proportional to the effort... I don't think lisp is beyond anyone's reach if they work faithfully on gaining proficiency in it for 3 months, maximum. 3 months is nothing if it makes you a much better programmer, and/or you get to use this awesome language.

I think it's important to first research lisp's benefits. Of course you won't be willing to spend the necessary effort if you're not convinced... I know I just couldn't resist the appeal of closures, macros, CLOS (multiple inheritance, multimethods, class redefinitions), the condition system (restarts in particular), generalized variables, and a bunch of other fun stuff.


"The weedout process has worked its magic again ;P"

Please don't call me a weed.

You point out that the more effort you expend, the better you get. The problem is the rate of reward. Getting a productive lisp environment is hard work. Competing implementations, sometimes-compatible libraries, very little documentation -- these are things that make lisp, holistically, hard to adopt. Every other production language makes it easier to get to the programming. I can't think of a worse-supported production language than lisp.

Also, the benefits you mention are either not that uncommon, or not that valuable. The only thing lisp has as a unique feature is its macro system, thanks to its syntax. It's the only thing that can't be adapted straight into another language. Otherwise, the features you mentioned are basically available elsewhere: Visual Basic has closures. C++ has multiple inheritance. Python has multimethods. Ruby has class redefinitions.

The thing is, I wanted to believe. I stocked up my library with PCL, SICP, The Little Schemer, and On Lisp. I learned emacs. I struggled with asdf. What I found, though, did not seem to live up to the promise of a hidden pearl.


"Please don't call me a weed."

Sorry :(

"You point out that the more effort you expend, the better you get. The problem is the rate of reward. Getting a productive lisp environment is hard work. Competing implementations, sometimes-compatible libraries, very little documentation -- these are things that make lisp, holistically, hard to adopt. Every other production language makes it easier to get to the programming. I can't think of a worse-supported production language than lisp."

I agree.

"Also, the benefits you mention are either not that uncommon, or that valuable. The only thing lisp has as a unique feature is it's macro system, thanks to it's syntax. It's the only thing that can't be adapted straight into another language. Otherwise, the features you mentioned are basically available elsewhere; Visual Basic has closures. C++ has multiple inheritance. Python has multimethods. Ruby has class redefinitions."

Well, it's uncommon and valuable to have all of them in one coherent language. Especially since they're pretty orthogonal features that complement each other well. The whole is greater than the sum of its parts.

"The thing is, I wanted to believe. I stocked up my library with PCL, SICP, the little schemer, and On Lisp. I learned emacs. I stuggled with asdf. What I found, though, did not seem to live up to the promise of a hidden pearl."

I'm sorry that you've come to the conclusion lisp is not right for you. Maybe in a couple years you can try again and most implementation/library/documentation/etc issues will be resolved?


"Sorry :("

Nah, that's ok. ;) CL isn't really for me, now. As I said, common lisp, right now, is too much work. Who knows what Arc 2020 will be like? ;)


"Otherwise, the features you mentioned are basically available elsewhere; Visual Basic has closures. C++ has multiple inheritance. Python has multimethods. Ruby has class redefinitions."

Exactly. Common Lisp is close to a superset of the features of other programming languages.

Even in cases where Common Lisp lacks a feature, if it's something that can be expressed by changing syntax, macros can get you pretty close. I remember a thread on comp.lang.lisp where several people had a go at adding pattern matching to Common Lisp, for example, and were able to get a pretty long way towards that goal in a short amount of time. Another example: seems like half of the Lisp books out there present an implementation of Prolog in Lisp.


According to Larry Wall:

"use Lingua::Perligata;

If you allow a language to mutate its own grammar within a lexical scope, how do you keep track of that cleanly? Perl 5 discovered one really bad way to do it, namely source filters, but even so we ended up with Perl dialects such as Perligata and Klingon. What would it be like if we actually did it right?

Doing it right involves treating the evolution of the language as a pragmatic scope, or as a set of pragmatic scopes. You have to be able to name your dialect, kind of like a URL, so there needs to be a universal root language, and ways of warping that universal root language into whatever dialect you like. This is actually near the heart of the vision for Perl 6. We don't see Perl 6 as a single language, but as the root for a family of related languages. As a family, there are shared cultural values that can be passed back and forth among sibling languages as well as to the descendants."

I'd say Perl is getting comparable macro-level functionality. Perl also has one likable feature that is less present in lisp: different idioms look different. When I hear Paul Graham talk of (a i) being either an array index or a function call, I start to think about concepts like Hungarian notation to keep my code more readable.

A uniform syntax structure isn't required for macros.


> "so there needs to be a universal root language, and ways of warping that universal root language into whatever dialect you like."

Sounds like XML.

> "A uniform syntax structure isn't required for macros."

It makes it a lot easier!

I don't understand what you mean by "pragmatic scopes." Does Perl 6 have macro functionality?


I would take 3 months to learn a new language, but then my startup would be dead.

I picked up Python in a day. I'm quite proficient at it a year later. A simple syntax + a great standard library is the best way to get things done, if not the best way to learn to write software.


Make it: 3 months full time, plus whatever amount of time is needed to become deeply conversant with all the underlying lambda calculus concepts.


You're saying nobody can become proficient (not "deeply conversant") in lisp in 3 months because I can't become deeply conversant in lambda calculus in 3 months?!

nobody, proficient, lisp. I, deeply conversant, lambda calculus. You're comparing apples and oranges.

I learned lisp (pretty deeply, I'd qualify myself as intermediate then and now) in what... 5 or 6 months? And I was alone. And I didn't know emacs. With some coaching and pointers I'm pretty sure I could have made it in 4. Aren't you happy?


Well, we have 2 data points now. I guess we can agree that it's harder to learn than Python :-)

For what it's worth, I think learning Lisp is useful for getting (re)acquainted with lambda calculus if nothing else.


Lisp doesn't strike me as a difficult-to-learn language. It actually seems the easiest of them all, definitely not harder than Python to pick up. C++ is at the other end.


The power of a programming language is proportional to its capability for innate abstraction. If that's true, it follows that list-oriented languages are inherently inferior to their hashtable-oriented brethren. (These orientations are often misguidedly referred to as "functional" and "object-oriented" paradigms, which I find to be useless, overloaded terms.) Basically, with list-oriented langs the primary abstraction is a tree, whereas with hashtable-orientation, it's a graph. I'm talking about the -primary abstraction- (i.e. what you "think in" when hacking); obviously you can implement any structure in any powerful enough language. If others don't see it this way, please illuminate me.

Lists confine one to rigid hierarchies, which have to be compensated for with dirty (but sexy) hacks like in-language macros. Meanwhile the index of hashtables is arbitrary, which allows you to do naturally the things you have to patch in Lisp.
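A concrete (if hypothetical) version of that claim: in a hashtable-oriented language, an arbitrary, even cyclic, graph falls straight out of the built-in structure. A minimal JavaScript sketch (my own illustration, not anything from the thread):

```javascript
// A directed graph as a plain hashtable: keys are node names,
// values are adjacency lists. The back-edge makes it cyclic,
// so it cannot be a rigid hierarchy (a tree).
var graph = {
  a: ["b", "c"],
  b: ["c"],
  c: ["a"] // back-edge to 'a': the graph has a cycle
};

// Depth-first reachability; 'seen' is just another hashtable.
function reachable(g, start) {
  var seen = {};
  function visit(node) {
    if (seen[node]) return;
    seen[node] = true;
    (g[node] || []).forEach(visit);
  }
  visit(start);
  return Object.keys(seen).sort();
}

console.log(reachable(graph, "b")); // → ["a", "b", "c"]
```

Of course, as others point out further down the thread, the same structure is just as easy to build with lists or structs; this only shows the hashtable version is direct.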

Though hashtables are more powerful and easier to grok, for one reason or another nearly all (popular) hashtable-oriented languages are total crap (C++, Java, C#), but in some regard a step up from deformed languages like C (in all seriousness, how does anybody get by without first-class functions and hashtables?). My tentacles have only found two decent hashtable-oriented languages: JavaScript >~1.5 and Io.

Anyway, why is Lisp unpopular? Because it's harder for most programmers to think in lists than hashtables. Then, why does the lang have a cult following? Because it's well crafted and consistent, which seems to cause some people to overlook its shortcomings, even to the extent of seeing design flaws as features.

But as far as I'm concerned the Language War is over anyway. JavaScript won.


My tentacles have only found two decent hashtable-oriented languages: JavaScript >~1.5 and Io.

I'm not sure whether or not you'd find it "decent", but you might consider adding Lua to your list. It's small, relatively fast, and uses hashtables as its composite data structure. It also has some other neat stuff, like tail-calls and coroutines.


Thanks for the tip. I've heard some good things about Lua, but never gone further than Wikipedia.


I will second the suggestion of Lua. Here's my raw beginner's introduction to Lua tables:

The basic structure is the hashtable. You can use any first-class value as a key (number, string, function, another table) and similarly any first-class value as a value. Creating a table is done with the table constructor "{ }" ('> ' is the repl prompt):

  > a = { }
So now we have an empty table named 'a'. Let's say we wanted to have a table containing a list of colors - this is represented in Lua as a table with ascending integers as the keys. (Starting at 1, rather than 0, which is a bit unconventional)

  > a = { 'red', 'green', 'blue' }
This is equivalent to saying

  > a = { [1] = 'red', [2] = 'green', [3] = 'blue' }
If we use strings as keys, we start seeing some of the syntactical sugar Lua offers. Let's look at favorite foods:

  faves = { bob = 'pizza', george = 'cake', mary = 'pie' }
Note that no quotation marks are needed around string keys. (Well, unless the key is a language keyword. That's a bit annoying, but it's related to the single-pass compilation, which is valuable. { if = "can't do it" } fails. { ["if"] = "can do it" } succeeds.)

We can use numeric indices along with string ones in the same table:

  mixed = { 'a', 'b', 'c'; state = 'NC', city = 'Charlotte', county = 'Mecklenburg' }
If we want to access values from a table, we can use a subscript notation.

  > print(a[1])
  red
  > print(faves['bob'])
  pizza
Here's a winner though. If we want to subscript a string, we just use the dot notation.

  > print(faves.mary)
  pie
What happens if we subscript a nonexistent entry in a table?

  > print(faves.james)
  nil
No error is thrown, which is handy. Tables are defined as having the unique value nil as the value for all nonexisting keys. In fact, if you wanted to remove an entry from the table, you just set the key to nil:

  faves.george = nil -- the cake is a lie! And the key 'george' is removed.
One nice thing about Lua tables is that they are extremely regular. There aren't special cases in their behavior. They're easy to construct, inspect, and manipulate. They are the fundamental data type of the language, and everything is done in terms of tables. Objects are created out of tables. Namespaces are tables. Modules are tables. Configuration files are tables. It's an extremely clean and convenient design.

In addition to the tables, you get first-class functions:

  > function a() print 'hello' end
  > a()
  hello
Is syntax sugar for:

  > a = function() print 'hello' end
  > a()
  hello
These syntax sweeteners we've seen work together, too:

  > function a.foo() print 'world' end
  > print(a["foo"])
  function: 0x807f2e8
  > a.foo()
  world
Ok, we're almost to objects. The next sugar we see is the ':' notation.

  > a = { color = 'blue' }
  > function a:fave_color()
  >   print('My favorite color is ' .. self.color) -- .. is concatenation
  > end
If a function is defined with the ':' notation then it has an implicit local value called 'self' which is set to the containing table.

  > a:fave_color()
  My favorite color is blue
At this point it's pretty easy to create simple prototype-based objects.

What brings even more power to the table is that we can define custom behavior on each one. We can set a 'metatable' which defines how the table responds to subscripting, the various mathematical operators, and being called in the functional position. In a quick script I worked on I implemented a prototype-based class system in under 20 lines of code, all with the power of metatables. Lua tables are very powerful and though similar to the ones in Javascript are even cleaner and more pleasant to work with.
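For readers coming from the JavaScript side of that comparison, here is roughly the analogous prototype-based sketch (a hypothetical illustration of mine, not the poster's Lua code): Lua's metatable `__index` and JavaScript's prototype delegation play similar roles.

```javascript
// A "class" is just a table of shared functions; instances
// delegate to it, much as a Lua table delegates via __index.
var Animal = {
  speak: function () { return this.name + " says " + this.sound; }
};

function makeAnimal(name, sound) {
  var obj = Object.create(Animal); // the delegation link
  obj.name = name;
  obj.sound = sound;
  return obj;
}

var cat = makeAnimal("Felix", "meow");
console.log(cat.speak()); // "Felix says meow"
```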

Anyway, it's all nifty stuff. Other features you get: fast incremental garbage collection, first-class functions, asymmetric coroutines (isomorphic to one-shot continuations), lexical scoping, closures. Lua is also one of the fastest interpreted languages and has an excellent API for binding to C if you want to do something performance sensitive, or want to use a library written in C. The language itself is written in 100% pure ANSI C. It runs on everything from embedded microprocessors to mainframes, in lego robots and on space satellites. And the community is very friendly.

If you want macros, there are several community-run variants of the language with macro facilities... I haven't gotten very deep into those though.

Good documentation for getting started:

  http://www.lua.org             -- official lua home page
  http://www.lua-users.org       -- lua community wiki
  http://www.lua.org/docs.html   -- lua documentation
  http://www.lua.org/manual/5.1/ -- lua language reference
  http://www.tecgraf.puc-rio.br/~lhf/ftp/doc/hopl.pdf -- a fascinating article on the history and features of lua
Check Lua out and see how you like it. I find it to be a very pleasant language which is fun to work in.


And what about CLOS? Does that not make Lisp, in your words, a "hash programming language"?

Really, I don't get the difference you state between list/hash programming languages. I've worked a little with ECMAScript and I don't know anything about Io, but I'll put it on my to-do list. Can you please expand on it? What's the main difference between them?

You say that the primary abstraction of a "list" programming language is a tree, which I think is not right. Since a tree is a directed graph without cycles, you wouldn't have loops, so it is at least a directed graph. But then, which property does the "hash" language graph have that the "list" language graph does not? Is it an undirected graph? I think that is not possible, because if it's undirected, how do you make your program go "forward" and not "backward"?

I hope I'm not being too obscure and that my questions make some sense.


I've never looked at CLOS. But, what I mean is the "dominant metaphor," which I suspect is still lists.

I'll just go through my probably plebeian understanding. Arrays are to lists as hashtables are to objects. An array, in my mind, is a list that only contains one type and is indexed with enumerated integers. On the other hand a list can contain any type, but is also indexed with enumerated integers.

In JavaScript:

  array = [1, 2, 3]
  list = ["one", [[array], 3]]
  array[0] == 1
  list[20] = 23 // list indices aren't necessarily a linear enumeration
Of course, arrays and lists are both technically Arrays in JavaScript (a bad naming choice; I'd have called them Lists). Now a hashtable is typically just a list that uses strings for indices instead of integers.

  hashtable = {
    "today":20080403,
    future:function(x) { return this.today + x }
  }
A "method" is just a value that happens to be a function. Usually hashtable-oriented languages choose to abstract away the string, and treat it as a variable.

  hashtable["today"] == hashtable.today
  hashtable.method(23)
Like with lists/arrays, JavaScript gets hashtables/objects almost exactly right, but again is subject to some questionable naming choices.

RE: trees and graphs -- I was getting at the relationships between nodes, not the actual computations, but I'm not comfortable enough with the terminology to explain exactly what I meant.


Ok, I think I get you. But then I think you can't categorize programming languages this way; the default data structures provided by the language don't characterize it. In fact, it seems to me that what you like most about JavaScript is that it's object-oriented, dynamically typed, and has first-class functions: features that other languages lack, and not that its "dominant metaphor" is the hash. It just happens that in JavaScript the objects are built with hashes (at least apparently), and can be easily extended.

It's a fun thread :D


Built-in data structures are a big point of categorization of langs on my end. To me, the defining feature of Java is its classical structure. If you got rid of classes, you'd have a different language (in the interface sense). And interface is really all I'm talking about.

I'm basically just asserting that objects (should) == hashtables. This is quite literal in JavaScript. Other languages bend the metaphor in different directions, and obscure it so that no one even knows what "object-oriented" really means beyond particular idiosyncratic syntax in this language or that.

PG of course talked about this before, in Why Arc Isn't Especially Object-Oriented:

> I've done a lot of things (e.g. making hash tables full of closures) that would have required object-oriented techniques to do in wimpier languages ...

I'd argue that he was employing genuine object-oriented techniques, but just didn't have classical syntax and didn't consider what he had an "object." Other languages make a point about it, and use special syntax, which fogs the whole thing. Perhaps some people in "OO" mindsets have the kind of naivete that C-only hackers I've met have about first-class functions.

Actually, I just realized the whole reason C++, Java, C#, and co. have "methods" in the first place is just compensation for not having first-class functions you can stick in a hashtable.
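That last claim is easy to make concrete. A minimal sketch (hypothetical names, my own example) of an "object with a method" built from nothing but a hashtable and a first-class function:

```javascript
// No class declaration anywhere: the "method" is just a
// function-valued entry in a hashtable, and 'this' is the table.
var counter = {
  count: 0,
  increment: function () {
    this.count = this.count + 1;
    return this.count;
  }
};

counter.increment();
counter.increment();
console.log(counter.count); // 2
```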


If objects == hash tables, any language with decent hash tables, including lisp, has objects that you're perfectly happy with. Lisp also has other data types, but their existence means that lisp has more power, not less.


> But, what I mean is the "dominant metaphor," which I suspect is still lists.

That tells us more about the basis for your suspicions than it does about lisp.

It's okay to like javascript more than you like lisp. It's also okay to be mostly ignorant of lisp. However, it's poor form to make up things to "support" those positions.


"And what about CLOS? Does not make Lisp, in your words, a "hash programming language"?"

I think in the "hash" languages, you usually consider the method to "belong" to an object in some way.

With multi-method dispatch, the relationship between objects and methods is more fluid. So I think there is a difference here, too.

It's not that "list" languages only use tree structures for everything, just that trees are the more "natural" choice in those languages.


I think that your comparison of the relationship of lists to lisp and hashtables to OOP languages is rather superficial. Lists relate to lisp differently than hashtables (or full objects, actually) relate to languages such as Java or Ruby.

In those languages, everything is an object; that is, every construct you define, whether data or code, starts as an object: a set of behaviors and properties. (Assuming it is fully OOP, unlike Java.)

In common lisp, everything is an object as well, with its own set of properties and behaviors. Cons cells are basic, for example, but you can still add properties to them and define methods for them. Encapsulation is not enforced, but that's what closures and packages are for.

Lists are dominant in lisp for another reason: the language itself is represented in them, not just the data and the methods but the pre-compiled language, so that you can use lisp's meta-programming facilities to generate lisp code itself. In most other programming languages, the code is represented to the compiler as text, and to use such meta-programming facilities would require string parsing. Macros aren't a 'hack' but an actual paradigm shift. Attempts to do the same thing in other languages have largely been very clumsy; witness C++ macros. (Template Haskell apparently has managed to do it properly, though.) Nearly all of Lisp's syntax is for defining the structure of the code; everything else is done with operators, functions, and macros.

If languages like Java or Ruby were represented in hashmaps the same way that Lisp is represented by lists, they would probably be incomprehensible. If you were to attempt to use a structure to represent the language, it would probably end up being something of a tree format. Code is naturally hierarchical, even class definitions, and if I were to do the same kind of thing that one does with Lisp in C++ or Java, I would end up using lists. So I don't think that this comparison is really correct.

(BTW, I love C. It's not deformed, it was designed like that to A- make it easy to implement and B- give the programmer as low a level access as he needed. You are meant to define your own data structures and implement them in an efficient way using algorithms that make sense for the usage. You are not confined to a preset, possibly inefficient implementation. This is a level of control not available in a lot of other languages which is why C is so commonly used to write interpreters and compilers for other programming languages. Those highly efficient Python hashtables are implemented in C (and maybe a bit of assembler))


While lisp code is represented as lists, other user data can be represented with hash tables, vectors, arrays, classes, etc, including lists. (Yes, the name "lisp" refers to lists, but the language has grown since the name was picked.)

Also, it's easy to represent arbitrary graphs with lists, in much the same way that you'd do so with hash tables, structs, etc. Yes, the way that a node refers to other nodes differs but there's no restriction on the relationship of the nodes or the overall structure. (The difference between different kinds of graphs has nothing to do with how one node refers to another.)

And, macros have nothing to do with any of this because they are "just" code that turns code into other code. Perhaps another code representation would be better than lisp's, but since few languages have one, and some of those that do break it with every release....

In short, Gordianknot's thesis and examples are wrong, he doesn't understand graphs, and he has no idea what macros do.


I was only talking about the code itself, the primary metaphor of the programming language, not what the language actually represents. Most langs that I've encountered are structured as semi-formalized strings (i.e. C-derived langs), whereas Lisp is structured in "physical" lists. I shouldn't have talked about "abstractions," because that wasn't what I meant. I was getting at the "concretions" of the actual lingual interface.


> I was only talking about the code itself, the primary metaphor of the programming language, not what the language actually represents. Most langs that I've encountered are structured as semi-formalized strings (i.e. C-derived langs), whereas Lisp is structured in "physical" lists.

Huh? Let's review.

>>>If that's true, it follows list-oriented languages are inherently inferior to their hashtable-oriented brethren.

Javascript code is semi-formalized strings or ASTs.

The careful reader has noticed that lists that represent code are ASTs with context-dependent field names. Since the nodes provide the context, said dependence isn't a big deal.

I don't know how many javascript programs manipulate their ASTs. (Lisp programs with macros are manipulating their ASTs.) The vast majority of javascript hash table operations are on data. (Yes, lisp code can be data, but not all lisp data is code.) In that, they're no different than any other language that has decent hash tables, such as lisp.
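The "lists that represent code are ASTs" point can even be sketched in JavaScript itself, using nested arrays as a toy code representation (a hypothetical illustration of mine, not how any real macro system works):

```javascript
// A toy expression "AST" as nested arrays: [operator, arg, arg].
var expr = ["+", 1, ["*", 2, 3]];

function evaluate(e) {
  if (!(e instanceof Array)) return e; // a literal number
  var args = e.slice(1).map(evaluate);
  if (e[0] === "+") return args[0] + args[1];
  if (e[0] === "*") return args[0] * args[1];
  throw new Error("unknown operator: " + e[0]);
}

// Because code is data, a "macro" is just a function from
// arrays to arrays: rewrite (* x 2) into (+ x x).
function doubleToAdd(e) {
  if (!(e instanceof Array)) return e;
  var rewritten = e.map(doubleToAdd);
  if (rewritten[0] === "*" && rewritten[2] === 2) {
    return ["+", rewritten[1], rewritten[1]];
  }
  return rewritten;
}

console.log(evaluate(expr));                     // 7
console.log(evaluate(doubleToAdd(["*", 5, 2]))); // 10
```

The point stands either way: most JavaScript hashtable operations are on ordinary data, and the AST manipulation above is the exception, not the idiom.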


I don't understand why Java is "hashtable-oriented"...

What design flaws of Lisp are seen as features?


Really, what are classes other than sugary hashtables?


sets of related closures over common state?


Okay, and I would look to hashtables as the best way of describing these "sets."
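The two views meet in a short sketch (a hypothetical example of mine): a hashtable whose entries are closures over common state.

```javascript
// "Sets of related closures over common state," collected in a
// hashtable. The balance variable is captured, not stored in the
// table, so it stays private to the two closures.
function makeAccount(balance) {
  return {
    deposit: function (n) { balance = balance + n; return balance; },
    withdraw: function (n) { balance = balance - n; return balance; }
  };
}

var acct = makeAccount(100);
acct.deposit(50);  // balance is now 150
acct.withdraw(30); // balance is now 120
console.log(acct.deposit(0)); // 120; no way to touch balance directly
```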


I like Lisp a lot, and from my own experience I think that the main reasons why people dislike Lisp are prefix notation and parentheses.

Sure, my experience is only anecdotal and not at all large. But every time I've tried to introduce someone to Lisp (I'm talking about 5 people; none of them have converted...) they seem incapable of seeing further than the parentheses and the prefix notation, but especially the parentheses.

I tried to explain to them the benefits of being such a regular language: how the source code is a list, which makes it really easy to do meta-programming; the power of macros; the fact that using a good text editor (Emacs + SLIME) with its excellent indentation you really don't have to care about parentheses: you follow the indentation, you don't count parentheses! Not to speak of the prefix notation, which they seem allergic to. But it has all been in vain.

Lisp is so aesthetically different from the other Algol-derived languages that it simply does not fit their brains, which I think is a real pity. Four of them are Perl programmers, Perl being their blub. They believe and act as if there were no language better than Perl; they can do everything with Perl. When I explain macros and metaprogramming they reply that Perl can do that with eval and strings. I've tried that and it is, at best, sadomasochistic!
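
For what it's worth, the eval-with-strings approach is painful in any language, not just Perl. A sketch in Python (with a hypothetical swap "macro") shows the standard failure mode:

```python
# A "macro" that builds code by pasting strings together.
def swap_stringly(a, b):
    return f"{a}, {b} = {b}, {a}"

env = {"x": 1, "y": 2}
exec(swap_stringly("x", "y"), env)
print(env["x"], env["y"])  # 2 1 -- fine for simple variable names...

# ...but splice in an argument with structure of its own and the pasted
# text is no longer a valid program:
try:
    exec(swap_stringly("x", "y + 0"), env)  # "x, y + 0 = y + 0, x"
except SyntaxError:
    print("string-pasted code miscompiled")
```

Real macros avoid this because they operate on the code's structure, not its text.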

So, after this rant (sorry), I have to restate: the main problems people have with Lisp are parentheses and prefix notation.

PS: Some of them also worship Microsoft and argue that Windows XP is the best operating system out there, far better than Mac OS X (which none of them have tried for more than 5 minutes in a row), and that Linux is only good for servers. Sometimes I think that they suffer from severe brain damage, but apart from that they're pretty clever. So... it's an enigma.


The best retort to the parenthesis complaint is: "I like to ask people who complain about parentheses in Lisp if they are bothered by all the spaces between words in a newspaper." (http://smuglispweeny.blogspot.com/2008/02/what-hell-is-fortu...)


Well if you ignore the human visual processing system, that's a valid argument. One ascii character is just the same as the next.

Unfortunately for lisp, that's not the case. Whitespace is different from lines. There's a reason most people can't look past the parens - they're hard to look past.


I think this is a feeble excuse. That is, it's an excuse generated by a lazy mind. The first time I saw Lisp, I was not "primed" to dislike it. It was early in my programming education. I was fresh, as they say. It looked no more cluttered than Pascal, the main language being taught at the time.

Can I write really ugly and hard to visually parse Lisp? Yes. Bad code can be written in any language.


Parenthesis fade into the background with experience (especially if your code is well-formatted), in the same way you start to group things by curly braces in C or Java over time.

No one looks at C code the first time and thinks it's easy to understand.


The point I was trying to make is that whitespace and parens are not equally good as delimiters.

Specifically, reading a newspaper that has parens instead of whitespace would be harder.

(Specifically,(reading(a(newspaper(that(has(parens(instead(of(whitespace (would(be(harder.)))))))))))))

In terms of actually being a problem, I'd say it depends on the code, for sure. I could create that same example in C, but end it with }}}}}}}}}}}}}}. However, in my admittedly limited experience with lisp, it seems more likely to be highly nested, tempting people to put more levels on a single line.

I also agree with you that C shouldn't be held up as the paragon of readability.

Python is pretty darn good though.


But you can format and indent your lisp code so the reader (people) doesn't have to rely on counting parentheses.

Example taken from http://www.gigamonkeys.com/book/

  (defun test-+ ()
    (let ((*test-name* 'test-+)
          (another-var 'foo))
      (check 
        (= (+ 1 2) 3) 
    (= (+ 1 2 3) 6)
        (= (+ -1 -3) -4))))

You should be able to read what this code does (check basically runs the statements and reports whether they return true or false) without having to count a single parenthesis.


I agree that lisp can be made readable. And C can be made unreadable.

OTOH lisp encourages deep nesting, which makes it harder to be readable. Here's a roughly equivalent program, if I read the lisp correctly.

  void test_plus()
  {
     const char *test_name = "test_plus";
     const char *another_var = "foo";
     assert( 1+2 == 3 );
     assert( 1+2+3 == 6 );
     assert( -1 + -3 == -4 );
  }
I'm not saying that this is any better than the lisp equivalent - the lisp function is certainly more elegant. But because C is procedural/stateful, it's more natural to have flatter programs.

And flat programs are easier to make legible, because you're less tempted to put multiple things on a single line to avoid going another level deeper in your indentation.


I agree that the C version is slightly more readable than the Lisp version, but this slight improvement comes at a big price, which is the difficulty of programmatically parsing and generating C code.
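
To put a number on that difficulty gap: a complete reader for Lisp's surface syntax, plus a printer to regenerate source, fits in about a dozen lines (sketched here in Python; a comparable C parser is a serious project):

```python
def tokenize(src):
    # S-expression syntax needs only three token kinds.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok != "(":
        return tok
    form = []
    while tokens[0] != ")":
        form.append(read(tokens))
    tokens.pop(0)  # consume ")"
    return form

def write(form):
    if isinstance(form, list):
        return "(" + " ".join(write(f) for f in form) + ")"
    return form

src = "(defun test-+ () (check (= (+ 1 2) 3)))"
ast = read(tokenize(src))
print(ast)                # nested lists: ['defun', 'test-+', [], ...]
print(write(ast) == src)  # True -- parsed and regenerated, round-tripped
```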


I think parens would be less of a problem if the indentation conventions were easier on the eyes. Sometimes you need a ruler to figure out those 1-space indentations.


I think lisp's problems started when it wasn't designed for actual use. It is elegant, functional, and amazingly regular. Usability wasn't a design goal.

If you look at languages in wide use, they're all procedural - everything from ASM and basic to ruby and python. Similarly, all the pseudocode I've seen has been procedural. If you ask a lay person to come up with a way of making 20 pb&j sandwiches, you're going to get a list of steps for each sandwich.

I'd argue that functional thinking is not natural, it's learned. Prefix notation is not as natural as infix notation. It's not just Algol, it's likely a reflection of natural language, and the forces that shaped natural language.

My prediction is that lisp is going to become more popular in one of two scenarios. The first is that parallel processing becomes more necessary, and because functional programming is easier to parallelize, people will learn it.
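
A minimal illustration of that first scenario (Python threads, purely a sketch; real speedups depend on the workload and runtime): a pure function can be mapped over inputs by any number of workers without coordination.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Pure: the result depends only on the argument; no shared state.
    return n * n

# Because square is pure, the pool may run the calls in any order on any
# worker; map still returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```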

The second is that lisp is redesigned with usability in mind. The fact that it's not bad as long as you use emacs+SLIME is not a ringing endorsement. One of the advantages of, say, ruby is that you don't need an advanced editor to be productive. The design goal was to make a language that's nice to use, so it is nice to use.

If lisp were redesigned with usability in mind, I could see it being more popular. However, the people who like and use lisp enough to try to redesign it likely aren't bothered too much by the syntax, so it's not something they try to fix.


But Lisp is not a functional language. You can do procedural programming with it; moreover, I think that nowadays the usual Lisp program is either procedural, OOP, or a mix.

You really don't need to rely on Emacs+SLIME; an editor with good support for indentation will suffice. IIRC, pg codes in vi (or Vim). The idea is that, as in C/C++, good indentation is enough to show the nesting, so you don't need to count parentheses or curly brackets.

The last difference, infix vs. prefix: I can see that we are more used to infix notation. But we can't forget that the Algol-derived languages use a mix of the two (operators and functions are different!) while Lisp always uses prefix notation, and incidentally puts the parentheses around the whole expression (with the function name as the first element).


Good points. Perhaps the problem is also how lisp is taught? When I learned lisp, the procedural aspects of lisp were shown, but with the understanding that they were bad, and not to be used. Probably because it was shown as a functional language, and was meant to be used as such. Similar to how, say, c++'s functional programming capabilities aren't really taught.

If the indentation is sufficient, why not just remove the parens completely and rely on indentation?

Lisp does have an advantage in being regular, which might not be worth giving up, and Algol descendants are kind of a mishmash, to be fair. To be honest, I'm not enough of a lisp user to be able to know what a good solution is for this one. Maybe changing some function names would be useful.. for example 'add' is better than '+', which I read as 'plus'. Repurposing infix symbols as prefix symbols seems problematic.

If I were redesigning the language, one of the more important things I'd change is the keywords and builtins. For example, head and tail seem a lot better to me than car and cdr. A lot of lisp's keywords and functions come from math, and I'd probably change them to reflect mainstream algorithm design instead.


Well, to some extent there exists an unparenthesized version of Lisp: Logo :)

http://en.wikipedia.org/wiki/Logo_%28programming_language%29

And for the names used for the keywords and builtins, I agree with you.

About the +/add, I really don't see too much problem.


"If you look at languages in wide use, they're all procedural"

SQL?


I think your hatred for Perl is unfair. Yes, eval and strings is a terrible substitute for macros. Only an idiot would pretend otherwise. But 90% of the time, the other tools we do have available (first-class functions, everything internal being dynamic) lets us get by.

For example, a few months ago I was working on a blog post about how nice it would be if Perl had macros. I had a perfect example picked out, but I decided to try to implement it without macros. It ended up working perfectly.
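
The general trick behind that kind of solution (sketched in Python with a hypothetical my_unless; the parent's actual module is Perl): where a macro would delay evaluating its arguments, a plain function can accept zero-argument closures instead.

```python
def my_unless(test, body):
    # A control construct as an ordinary function: callers wrap the
    # pieces in thunks, so nothing runs until we decide it should.
    if not test():
        return body()

log = []
my_unless(lambda: 1 > 2, lambda: log.append("ran"))      # test false -> body runs
my_unless(lambda: 1 < 2, lambda: log.append("skipped"))  # test true  -> body never runs
print(log)  # ['ran']
```

The cost is the lambda noise at every call site, which is exactly what a macro would hide.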

The docs explain the original problem, and my solution: http://search.cpan.org/~jrockway/Context-Preserve-0.01/lib/C...

Also, if you read your friendly local CLOS implementation, you will notice a lot of macros. Moose is a CLOS-alike for Perl, and it obviously didn't need macros. (Actually, I would like to port Moose to Lisp, 'cause it's much more sugary than CLOS, and it has traits. Mixins FTL.)

Anyway, Perl is nearly as good as lisp, so be careful when choosing your examples. (Incidentally, I am currently working on a lisp variant that compiles to Perl, because I like lisp syntax better than Perl syntax, but I like Perl's CPAN and regex engine.)


Oh, I'm sorry. I didn't mean to say that Perl is a bad language at all. I was trying to describe my experience with my fellow coworkers, who happen to love Perl.

Actually, for the last 3 years I've been using it at work, including first-class functions (and Catalyst ;)), and I've enjoyed it quite a lot. I have some minor issues with its syntax (too many sigils for my taste, but I understand why they are there).

What I meant was masochistic is trying to use eval with strings to emulate macros, and even trying to defend it.

On the other side, CPAN is amazing and I find it difficult to live without!

BTW, I hope you'll publish the Lisp-to-Perl compiler here! Lisp + CPAN sounds really awesome and productive!


The source repository is here:

http://git.jrock.us/?p=Perlisp.git;a=summary

It's pretty useless right now though :) I started rewriting the reader, but didn't like any of the parsing modules on CPAN... so it's sitting at that stage right now. When I get some tuits, I'll probably just port over SBCL's reader, or perhaps emacs'.


Whoever downmodded my comment: please, can you explain the rationale for doing so? That way I'll never again piss you off with what seems to me an on-topic comment.

Thank you very much.


You probably stepped on the toes of someone who likes Perl and Windows. In particular, you mentioned Mac OS, towards which some people have an inexplicable antipathy. You're playing with fire, my friend ;) Downmods are the least of your worries now.


I can explain the antipathy:

1. They fear the unknown; 2. They fear fanatics.

Mac OS is The Unknown to them + Mac users tend to be pretty rabid fanatics = Let's bash them!


I think I know what you mean, being a Perl programmer and Mac user myself :)


One problem I have with Lisp is that there are a lot more fanatics than in other languages' communities. They are easily upset by even the mention of other languages and seem to see Lisp as the ultimate language with no room for any others. This produces blog and newsgroup posts extolling the virtues of Lisp and telling everyone who doesn't love Lisp that there is something wrong with them. People who say they know Lisp and still choose to use other languages are looked at as particularly defective. It's not just that they don't want to use other languages, it's more like they don't want other languages to exist. They spend at least as much time trying to make other languages look bad as they do trying to make Lisp look good.

I know this doesn't describe everyone who uses Lisp, but these fanatics are very vocal and the reason I stopped reading comp.lang.lisp and decided to try other languages. I am not against being excited about a language and trying to convince other people to use it. I love Haskell and tell people about how wonderful it is all the time, but I understand that they have reasons for using the tools they do. I am more interested in the theory of a language than in how good it is at solving some practical problem, but some people don't care at all about theory and just want to get things done. I don't understand why, but I accept it. I think that the most important thing the Lisp community can do to be more popular is to be friendly to newcomers and those who disagree about Lisp's superiority. There are some very helpful and friendly people, they just need to be more vocal and drown out those who are not.


My hypothesis has nothing to do with Lisp itself, per se:

People simply stopped paying attention because of the failed promises of AI - and in a nod to the parent, some of the talent base probably shifted to other technologies, and Lisp was left without an ecosystem. It's starting to build up again.

Beautiful case-in-point: I went to a developer's conference yesterday put on by one of the big software vendors that my company buys from. I was carrying a copy of PG's excellent On Lisp to read during dead times. Having that book out in the open started several conversations with people I wouldn't have met, incidentally. Each of those conversations went something like this:

He: Lisp! Who uses that anymore? Didn't that die out a long time ago?

Me: Well, it was big during the 80's because of AI, but it's in the middle of a comeback that started sometime around 2000.

He: really!? I thought it was a dead language. Who uses it?

Me: There are a few companies out there that use it. There are even commercial offerings of Lisp tools. There's Franz, Inc. which sells AllegroCL, for example. There are two or three others I can think of off the top of my head. There are some open source projects that are very active, too.

He: but it's just a scripting language, right?

Me: No, it's really a general-purpose programming language that can be used for any number of things. It's currently being used for writing web applications, among other things, but it could be used to write any kind of software, really. Whether or not it is interpreted or compiled is dependent on the implementation and how you want to use it.


I think there are various reasons why Lisp has the level of popularity that it does (I won't say it is categorically unpopular). Unfamiliar syntax, real or perceived attitudes of the community, number of primitives, the divergent nature of the language, industrial history, industrial support, editor support, academic support, perceived performance, programming trends and other things have all affected the adoption of Lisp.

I'm confident, given time, that all these concerns will be addressed and the rate of adoption will continue to increase. That's why I think now is a good time to get involved if you aren't already: be ahead of the curve.


That seems like a good list of reasons. I would add perhaps the sheer volume of code on the net in other more popular languages, compared to Lisp.

I disagree with your conclusion that Lisp is somehow going to take off, though. After being around for what seems like forever, it's still (and I will say it) a categorically unpopular language. If I were a betting man, I'd bet on JavaScript.


I suppose I wouldn't bet against you.


I wrote about this a while back (http://www.pchristensen.com/blog/articles/lisp-the-golden-ag...)

Basically, there are two Lisp eras - the big AI era and the new pg/Open Source era. The recent renewal of interest in programming languages, as well as publicity by writers like pg, Yegge, Raganwald, etc, have led more people to become interested in Lisp, but it's on an informal, person-by-person basis. Some of these people (like me) have chosen to stick with it and new Lisp communities have formed by gradual accretion. But this process has started later and been more distributed than other languages (PHP, Python, Ruby, Perl).

In researching things and trying to figure out Lisp setup problems, I've noticed that there are fewer technical "How do I ..." questions now than a couple years ago but more usage questions. The technical problems are being solved and now more stuff "just works" - not completely but much more than in the recent past. I've noticed this for CLisp, SBCL, ASDF, and other tools. There's more documentation, more tutorials, etc. Now there are a couple young web frameworks (UCW, Weblocks), generally accepted libraries for common tasks, etc. Lisp is growing in popularity and is maybe 1-2 years away from being "tasteable" - ie you can try it out without a big hassle and a research project. I think that will be an inflection point, after which growth will come much faster.


"Unpopular" needs to be further qualified. Rather than ask, "Why is Lisp unpopular?", ask, "Why is Lisp unpopular for X?". Lisp is actually reasonably popular for some things, and it's those things to which existing dialects are best adapted. Being well adapted to Y often means not being especially well adapted to X, even if it's for subtle reasons.

It's like asking, "Why are Arctic foxes (Alopex lagopus) not very popular on the African savanna?" Because they didn't evolve in response to the demands of that environment. The bat-eared fox, Otocyon megalotis, did.

The differences between Alopex and Otocyon aren't so massive that they seem like totally different kinds of animals. But it would be tricky to predict exactly what they should be, and develop the right kind of fox in a lab. Some adaptations are obvious (thick fur), other less so (specialized circulatory system). I suspect the same is true for programming languages. It's tricky to determine exactly what makes Scheme better suited to the classroom than to developing general applications. Perhaps the best way to solve the problem is the way PG is doing it with Arc, which is to take a very young, malleable dialect of Lisp and plunk it down in the environment to which you want it to become well adapted, and then continue development in response to the demands you encounter.


OK, I'll bite;

    (mapcar #'why-is-lisp-unpopular? 
            (list system-administration
                  enterprise-applications
                  web-applications
                  embedded-software
                  mobile-phone-apps
                  blogging-software
                  compiler-writing
                  configuration-files))
;)


I don't know the specific reasons why CL and Scheme are not very popular for those things; my general answer is that they're adapted to other things (like classrooms and smart students' brains). My point is that it's very hard to figure out, and so the best bet is probably to let a new dialect evolve in response to the demands of one or some of those environments.


Actually, it might be more accurate to think of programming languages as genes rather than animals, or at least, to include programmers as an important part of the phenotype. A complication in the adaptation view is that programmers can stop certain "genes" from being expressed. So a language like Lisp might not catch on in certain environments because the features that would make it powerful, like metaprogramming, are never actually used.


In my opinion, if you don't learn Lisp at school, picking it up later in your spare time is going to be quite a task.

After reading and doing the exercises from the Touretzky book and The Little Schemer, plus some articles here and there on the 'Net my experience was:

1) Lisps have their annoyances like all languages. Lots of little non-intuitive keywords to memorize, the dotted pairs, quite the baroque computation model (variables having sub-variables, etc.), the FOR syntax, DO, etc. Not a deal breaker, but annoying. (Touretzky seems to fixate excessively on some of these weird traits of Lisp.)

2) If your Math chops are rusty, you are going to have to work twice as hard. Mine are, and I had to. If you are fresh from taking your Computability Theory course, then recursion and infix are going to be second nature. If you haven't been doing Math for a while, it's going to be hard. Another reason to learn Lisp while in college.

3) You can learn the syntax in 2 hours, yes, but to really know Lisp you need to really understand recursion, continuations, thunks, macros, multiple dispatch, all the fine points of object orientation (to understand the CLOS) and who knows what else. Basically, you need to have a deep understanding of, well, all of CS? This is not something that can be learned on your spare time unless you have lots of it.

In my case, I am starting The Seasoned Schemer now, and taking it slowly, under no illusion that I am going to learn Lisp any time soon. It may well be that I am one of those "stupid programmers" who are unsuited for Lisp. Oh well...


OT: According to wikipedia, the PAVE PAWS phased array operates in the 420-450 MHz UHF range, with a corresponding wavelength of 66cm; if the error margin is really 0.1*lambda, the system is built to a precision of 6cm, significantly easier to meet than 0.1mm.

Also, it would be easy to compensate any error by controlling the phase of the fed signals.

See http://en.wikipedia.org/wiki/Ballistic_Missile_Early_Warning...


Thanks!

I was thinking of that very link when I wrote this. Dunno why I didn't look more carefully for it.

The number L = lambda/10 is actually part of the standard definition for a microwave system. Any circuit that has a physical dimension greater than L is too "big" to use standard circuit analysis, because the voltage and current values you measure at different points will be different due to propagation delays, even when those points are connected by a wire with no reactance and negligible resistance at lower frequencies.

The world's biggest microwave system (biggest machine period, actually) is the US power grid, even though it oscillates at a measly 60 Hz. Lambda is about (0.6c / 60 Hz) = 1800 miles.
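
The arithmetic here and in the correction above is easy to check (quick sketch; 0.6c is the assumed propagation velocity on power lines):

```python
c = 3.0e8  # speed of light, m/s

# PAVE PAWS UHF radar, 420-450 MHz: lambda is about two-thirds of a meter,
# so the 0.1 * lambda rule gives a tolerance of several centimeters.
lam_radar = c / 435e6
print(round(lam_radar, 2), "m ->", round(0.1 * lam_radar * 100, 1), "cm tolerance")

# US power grid at 60 Hz with a ~0.6c velocity factor:
lam_grid_miles = (0.6 * c / 60) / 1609.0
print(round(lam_grid_miles), "miles")  # on the order of 1800 miles
```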

This is proof that, electrically as in so many other things, the Midwest is 180 degrees out of phase with the rest of the country.


I don't understand. Is the claim that Lisp is less popular now because years ago large numbers of programmers working on government-sponsored projects had to use it, but now proportionately fewer programmers work on government-sponsored projects?


Yes, I believe the claim is that when huge government projects like this were completed, all the Lisp programmers left, and had to retrain to find non-Lisp jobs. This sounds similar to the "AI collapsed and took Lisp with it" argument.


There may be something to that argument, but another probably bigger factor is that there are now other dynamic languages. In 1990 there was no Python or Ruby. If you wanted to program in that kind of high level style, the only options were Lisp and Smalltalk.


All the programmers I know who wrote lisp in the 80s, now own software companies or are in management. That may explain some of it.


The problem with the AI winter argument is that it's half an answer that satisfies you enough to stop asking questions. If you invoke the collapse of commercial AI to explain Lisp's current state of disfavor, you then have to explain why the collapse of the dotcom bubble didn't have a similar effect on Perl or Java.

Anthropomorphically, you could say that Perl retreated to its base of support among sysadmins, (Common) Lisp took shelter in big-think research organizations, and Java reinvented itself as New Cobol.

But that doesn't really satisfy either, because it doesn't tell us why nobody tried to similarly re-purpose Lisp when DARPA et al lost funding. That's especially odd since Everybody Knows that a lisp core is really simple, and Lisp was still on the radar in academia, where people (called grad students) can be rewarded for re-implementing old ideas.

I think there is a pretty good explanation (with evidence, even!), but it's not a satisfying one in this community: the end of the Cold War caused a major reallocation of scarce resources away from expensive one-off projects for the government, and towards mass-market stuff for consumers and businesses (see below), and Lisp suffered from some initial disadvantages in the new environment:

* Limited availability on PC platforms

* Windows Apps were sexier. The internet was sexier.

* The glut of new CS students while the bubble was inflating was mostly taught Java. (Many of them are sensitive about arguments that lend further support to the idea that their education has turned out to be less valuable than they thought at the time. That may just be my bias showing, though. I think that Math should be relabeled CS, and that "programming" is for autodidacts and trade-schoolers.)

* CS is a new field, and industry tends to dumb down research in new fields for a generation or so after disruptive technical innovations. That way, they get enough of the low-hanging fruit that further disruptive innovation is unlikely and their capital is safer. See below, particularly the chapter on the relationship between GE and MIT before WWII.

Social arguments are hard to make here though, because so many people are so hooked on the "weird and scary and only for elitist pricks" narrative. That's kind of ironic, because high level corporate executives, if they heard the current public stand on lisp -- "it may be really productive, but it's too foreign for most people and I don't feel like learning it" -- could probably force adoption from above, especially now that cost-cutting is the order of the day.

http://www.amazon.com/Leonardo-Internet-Technology-Culture-R...


> If you invoke the collapse of commercial AI to explain Lisp's current state of disfavor, you then have to explain why the collapse of the dotcom bubble didn't have a similar effect on Perl or Java.

I'm not saying AI Winter is the definitive explanation, but AI collapsed so thoroughly that there were barely any AI companies still around. Even at the worst of the dotcom collapse, use of the web was still growing among the general populace, and the principal leaders like Yahoo, Amazon, and eBay are still around.

Your argument about Lisp losing ground because of the PC boom is really interesting, and I think it has some merit. Still, doesn't it raise the question of why people didn't just port Lisp to the PC and keep going? That's what happened with Unix -> Linux.


Or that there are fewer government-sponsored AI projects?


One problem I had was getting it installed. If that was easy, I would have gotten started sooner.


The biggest barrier, in my opinion, is what most people think of as "weird" syntax, or more precisely the absence of it. People are just not used to it. Add to that the lack of popular libraries for some modern niceties, and it all contributes to Lisp being unpopular.


I don't understand the syntax complaint: it is simple to learn, and with few exceptions, applies in all cases.

Other language syntax (C or Erlang, e.g.) is arbitrary and tedious by comparison.


It's not "syntax," it's human parseability. Lisp is hard to parse because humans don't think in lists. Some people are smart enough or naturally able to overcome that obstacle, but most aren't.


What this fails to take into account is that most programmers learn a language with C like syntax first. Either C, C++, or Java (you could even argue JavaScript). Like it or not, people have a tendency to avoid change when possible.


Another problem LISP had was the era of its widest use -- an era when compilers and most programming tools were proprietary, very expensive, entirely closed, and usually dependent on one or another variant hardware platform. This was also the pre-Cambrian era of computer architecture, when the hardware varied so much that porting was a serious issue. Nowadays, languages which have popped into existence and gained wide acceptance (Perl, Ruby, Python, for example) were developed in an open-source world. And now, finally, Lisp is catching up in that regard.


Actually, that little yellow thing looks to me like a large boom lift. Note the cage at the front, away from us. For cleaning bird droppings off the array elements?


yeah. It's the garbage collector.


There are too many disparities between a problem domain (microwave engineering) and a programming technique (Lisp) to compare the two this way.

I think it more reasonable to compare the popularity of problem domains (e.g microwave engineering vs robotics) or techniques (e.g. S-Parameter Simulation and Lisp).



Lisp is not unpopular (look at all the programming newbies around!), it's simply much misunderstood (like many of the high-end solutions, because its simple looking base is all but trivial...)


What's the misunderstanding exactly?


People like stuff that looks familiar but harder. To someone raised on C-style code, Lisp looks foreign and simpler, two big strikes.


I (((((((have)))))never)been))able))(((to))understand)why((((((it is)))unpopular))))



