In most anything written by a fan of newLisp, you'll find something like the following (taken from the linked article):
> Today, unfortunately, whenever someone mentions newLISP on an online forum frequented by adherents of another LISP, an almost clan-like flame war will erupt. An adherent of one particular dialect, usually someone who has never used newLISP, will shout obscenities such as, "newLISP goes back on just about every advancement made by LISPs", or "dynamic scope is evil!"
Maybe there's something to newLisp, but any time I see a group with that flavor of persecution complex ("All the criticism is wrong because they just don't understand!") I take pause.
To be honest, that could describe almost the entire (vocal) Lisp community these days, and I say this as a quite embarrassingly committed Lisp fan and user.
Anyway, I wouldn't say that dynamic scope is evil, but to most Lisp users it does just feel a little backwards. Unnatural, if you will. That's just one of the reasons why everyone secretly (or not-so-secretly) longs for the never-coming day when Emacs Lisp can just 'go away'.
Lispers can bicker over technicalities and matters of taste, but any programmer of a Lisp dialect would prefer to move to another one instead of using something else like Java or C++. At least I would. Even small Scheme-like dialects that are embedded in big libraries and industrial applications (stuff like Lush[1]) are very palatable to any Lisper (Arc, Goo, and other curiosities are also interesting).
Not so with NewLisp. It's fundamentally broken in ways that would make me look forward to ASP.NET over it. To give you an example: to calculate the value of Pi, they suggest firing up an external process and running the Unix bc(1) command :-P http://www.newlisp.org/index.cgi?page=Code_Snippets
To someone who wants to use newLISP to calculate Pi to the nth digit, it might appear "fundamentally broken." Of course, I don't know why on earth you would want to do that in the first place; using a program/language specialized for such a task would be a far brighter idea. As such, the doc you refer to correctly suggests using the 'bc' program.
However, for someone who wants a Lisp for rapid web development, newLISP is a dream come true:
I guess you can view newLISP in at least two different ways: as a naive implementation of Lisp, or as a reboot of it, branching from a much older LISP than most dialects today do.
I think that, of the new LISPs out there, Clojure has its act together much more than newLISP - it's quite sophisticated in a lot of ways, it's making rapid progress, and it now has a relatively large development community for such a new language.
But for newLISP users: if it works, it works, and more power to you.
I think Clojure is a wonderful language, and I do not see newLISP as a replacement for it. From the article:
In the same sense that you wouldn’t use JavaScript to write an iPhone app (some beg to differ), you wouldn’t use newLISP to write an operating system, a music player like iTunes, or a web browser like Firefox. For such endeavors I recommend without hesitation Clojure, Scheme, C, Objective-C, etc. In other words, languages geared for solving complex, low-level problems, as quickly as possible.
Maybe there's something to newLisp, but any time I see a group with that flavor of persecution complex ("All the criticism is wrong because they just don't understand!") I take pause.
I haven't read (in your article or elsewhere) sufficient defense for newLisp's choice of
o "Fexprs" (operators that receive their arguments unevaluated) over Lisp macros
o "One-Reference-Only" memory management over garbage collection
o Dynamic scope over lexical scope
But you've found it "the easiest [Lisp] to setup, deploy, and develop for"? Great! newLisp has fantastic documentation and a helpful community? Perfect! I wholeheartedly agree with zephjc: "if it works, it works, and more power to you."
* Fexprs are easier to write and more expressive than macros. They can be mapped, applied, and wrapped around other functions, fexprs, and built-in primitives, and, in the case of Newlisp, even mutated at runtime (see the first sketch after this list). Fexprs are a lot of fun. And the price is dynamic scope. More about dynamic scope later.
* ORO is something in between "manual" memory management, as in assembler or C, and real GC. It is more automated than the former and less automated than the latter. Basically, I agree with you: GC has some important advantages, but ORO is adequate, and I'd say not completely without advantages in practice as well (see the second sketch after this list). (Theoretically, GC is not excluded in the form of libraries. In the past there was little interest in that, but Greg's recent "Objective Newlisp" library provides a simple, reference-counting-based GC algorithm.)
* Dynamic scope gives more expressive power: functions in dynamic scope are about as expressive as macros, and they are first-class values. The problem with dynamic scope is accidental name clashes, or "overshadowing." That is where static scope helps, and I understand it is a reasonable choice for languages like Ada, Eiffel, and many others: safety over expressiveness. However, that was not the original design goal of Lisp, which was supposed to be a very adventurous language, and it is still visible: CL, Scheme, and Clojure programmers have to face exactly the same problem if they write macros. The solutions (namespaces, gensyms, and "hygiene") work in the same or a similar way for Newlisp's dynamic scope (see the third sketch after this list). So, it is not really consistent to complain about dynamic scope as unsafe while believing that macros are (or can be) "safe enough."
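To make the first point concrete, here is a minimal newLISP sketch of a fexpr (what newLISP calls a "macro"). The my-if name is my own illustration, not from the article: define-macro receives its arguments unevaluated at runtime, and the result is an ordinary first-class value.

    ; a fexpr: arguments arrive unevaluated, and the body decides
    ; which of them to evaluate, and when
    (define-macro (my-if condition then-clause else-clause)
        (if (eval condition)
            (eval then-clause)
            (eval else-clause)))

    (my-if (> 2 1) (println "yes") (println "no"))  ; prints "yes"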
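For the second point, a small sketch of what "one reference only" means in practice: newLISP copies values on assignment, so no two names share structure and there is nothing left over for a collector to trace.

    (set 'a '(1 2 3))
    (set 'b a)         ; b receives a copy, not a shared reference
    (push 99 b)        ; so modifying b ...
    (println a)        ; ... leaves a untouched: (1 2 3)
    (println b)        ; (99 1 2 3)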
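And for the third point, a sketch of an accidental clash under dynamic scope, followed by the usual newLISP remedy of writing fexprs with the built-in args so that no parameter symbols exist to clash with. The function names are made up for illustration:

    ; the clash: show-x sees whatever x is bound to at call time
    (set 'x "global")
    (define (show-x) (println x))
    (show-x)                ; prints "global"
    (let (x 42) (show-x))   ; prints 42 - the caller's x leaks in

    ; the remedy for fexprs: (args 0), (args 1), ... instead of
    ; named parameters, so the caller's symbols cannot be shadowed
    (define-macro (my-setq)
        (set (args 0) (eval (args 1))))
    (my-setq y (+ 1 2))     ; y is now 3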
The price of FEXPRs is not 'dynamic scope'; the price is that compilation gets hopeless. No matter how fast your interpreter is, for most practical purposes a native code compiler will a) do some checks at compile time and b) produce code that runs orders of magnitude faster.
Dynamic scope is not about 'power'; initially it was an implementation error. Then it stayed in Lisp for a while. Unfortunately most Lisp compilers implemented lexical scope (also because it runs faster), while interpreters stayed with dynamic scope. With Scheme the whole thing got cleared up, and interpreters and compilers of the same language now were using lexical scope. Dynamic scope was now an extension (so-called 'fluid lets'). The developers of Common Lisp accepted that this was the right thing (like everyone else in functional programming and most Lisps like ISLisp, EuLisp, Dylan, ...) and adopted lexical binding, also requiring that the compiler and interpreter behave the same. Common Lisp additionally allows dynamic binding when declared, because in many places it is a useful feature - just not by default.
The so-called 'Newlisp', from 1991, invented some ideas of its own, went back to old ideas, and reused names (like 'macro' for 'fexpr') in confusing ways. Where it gets stupid is when people believe that it is 'better designed' (it is as ugly as most Lisps) or that it is 'faster' (on constructed benchmarks).
The problem with dynamic scope is not name clashes. The problem is that I don't know what bindings MY code will use, because somebody else might have rebound variables or functions in his code. That's the purpose of dynamic binding: injecting new bindings into old code. Additionally it is bad for compiled code, etc.
Newlisp is full of potential scope problems, and writing anything larger than a small script with FEXPRs is a maintenance nightmare. Lots of code was written with FEXPRs in the past, but today it is gone. Much of the dynamism they provide is just not needed and simply gets in the way. One can construct artificial requirements where FEXPRs would be useful; in practice they are a relic of the past. Scheme and Common Lisp and all the other lexically scoped dialects are plenty expressive.
Hm. You make this sound attractive; I've been messing about with PLT Scheme for a few months and will check out newLisp to see how they differ.
That said, I need to disagree with you on one point:
> I realized that it was syntax that was at the root of most programming errors.
No, you're using the word incorrectly. Syntax is the structure of a language--e.g., curly braces vs. whitespace vs. parens to set scope. Syntax errors are trivial and can be caught by the compiler / interpreter.
The root of most programming errors is a logic fault--i.e., the programmer is either solving the right problem incorrectly (didn't think it through or didn't understand it) or is solving the wrong problem (incorrect / out of date / misunderstood requirements).
> Up to that point in time my knowledge of LISP consisted of the usual hearsay and mantra of those unfamiliar with the language: "People who like it are crazy zealots who think they’re superior to everyone!"
So I used it... and now I'm crazy for it and I think I'm superior to everyone!
Good article. I've liked newLISP since I discovered it a few days ago, but I wasn't sure about it after a few responses to my comment in the newLISP web framework thread...
I think the syntax cards were pretty bogus anyway. If you really wanted to compare syntax, you'd do better to compare grammars (e.g., Scheme's: http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z...) than to decide what's important enough to include in some gif. It's hand-wavy at best (cf. the explanations of why things like eql were on the card) and a straw man at worst.
I'm not saying that newlisp is to be avoided, but:
Lisp's power is not in its lack of 'syntax', but in the specific way that the code is structured. You can write code that writes code. The regular structure of the code helps this, not any specific syntax.
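A tiny sketch of that point, in newLISP syntax (the same idea holds in any Lisp): code is a plain list, so a program can build a new program as data and then run it.

    (set 'expr (list '+ 1 2))   ; build the program (+ 1 2) as a list
    (println expr)              ; (+ 1 2)
    (println (eval expr))       ; 3 - run the data as code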
The Fexpr vs. macro comparison is bogus.
In Common Lisp (or Scheme or Clojure) you will normally not be macro-expanding a macro more than once (namely, when you compile it).
You will notice that compiled versions (CL or Scheme) of these tests trounce newLISP soundly. Eval is to be avoided because most of the time there are a lot of more structured ways to do exactly the same thing. Evaluating arbitrary code is very clever and can lead to a very big headache.
I don't know about the rest, other than that 'we have fork' is probably not going to buy you much from Clojure folks.
I think a better argument for newLisp would have been:
1.) It is lispier than Python or C.
2.) It is simple and good for scripting, and therefore good for Lisp newbies (you don't have to get your head around macros or the intricacies of eq, eql, equal, equalp, and =).
I think that the CL community appears so abrasive because the language is so complex (this has nothing to do with syntax, but simply with the size and complexity of the spec). You have to make a serious effort to understand/know it before you can even begin to ask reasonable questions about it.
Even people who have been programming CL for years can get things wrong answering off the top of their head. It is not poorly documented; it is just huge and in some respects complex.
If you use eval in your program, the program will macroexpand each time it evaluates code containing macros. Since even standard operators like setf are macros in CL, it will macroexpand practically every time it uses eval.
So, macros really slow down all, or practically all, programs containing eval. One can avoid eval, but from my point of view that means avoiding the most powerful, the most fun code=data feature in Lisp. It doesn't look good to me. So, why should one avoid eval?
"Eval is to be avoided because most of the time there are are a lot of more structured ways to do exactly the same thing." I miss the point here.
EVAL is most of the time not needed. Why add a complication? Write simpler code. There is little need to call EVAL over and over.
Using EVAL in Newlisp or in any other Lisp slows down execution. Any compiled Lisp code will run MUCH faster than any evaluated code in Newlisp (with fexprs, or without).
Who cares if the Newlisp interpreter is faster than the CLISP interpreter, if Newlisp is anyway many times slower than compiled code in any other Lisp? Why should I burn useless cycles with Newlisp, when a simple compiler will speed up things much more?
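For a rough feel of that overhead, newLISP's own time primitive can compare a direct call with the same call routed through eval (the add3 helper and the iteration count are my own illustration; absolute numbers will vary by machine):

    (define (add3 a b c) (+ a b c))
    (time (add3 1 2 3) 100000)           ; direct call, 100000 iterations
    (time (eval '(add3 1 2 3)) 100000)   ; same call through eval: noticeably slower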
Right, that is the reason there is such a huge difference in eval runtimes between the languages. Macros do slow down eval if you are doing macro-expansion. They are complicated enough that normally you do not write (or expect) them to expand quickly. This is to be expected, because macros and fexprs are quite different. Overall, macros can speed up your program because you are allowed to do certain calculations ahead of time, if you are clever about it.
I am not saying that you should avoid eval completely, I am just saying that it shouldn't be your only tool. And when you do use it, if you put it in a tight loop, you are probably doing something wrong.
The main alternative that comes to mind for me is function passing. There is a lot you can do with function passing that overlaps with eval, except the function-passing version will be safer and faster.
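A minimal sketch of that trade-off, in newLISP syntax (apply-op is a made-up helper): passing the function directly does the same job as building and eval'ing a form, without constructing code at runtime.

    ; eval-based: build code as data, then evaluate it
    (eval (list '+ 3 4))         ; 7

    ; function passing: no runtime code construction at all
    (define (apply-op op a b) (op a b))
    (apply-op + 3 4)             ; 7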
We agree that it is one of the most powerful, most fun aspects of using Lisp (the power is what makes it fun!). I also hope we agree that the frequency of its use should be the inverse of its power.
Macros only really slow things down when, for some reason, you call EVAL on the same source code many times.
When code is run using EVAL and an interpreter, macros can be expanded once and the expanded code used the next time. For example, in a loop it is not necessary to expand a macro on each iteration.
If one needs to EVAL new code all the time, then one may ask oneself if that is really necessary (it may be, for example, in some kind of genetic programming). Otherwise it is basically a programmer error to do so.
I haven't seen any convincing example where it is really needed. My Symbolics Lisp Machine (which comes out of the MacLisp tradition, which had FEXPRs) has only limited FEXPRs (they can't be compiled and can only be used at the top level). Still, the developers, who had extensive FEXPR experience with MacLisp, were able to write the compiler, the garbage collector, the graphics driver, the window system, the network system, compilers and interpreters for various languages, text editors, mail clients, mail servers, and much more without using FEXPRs.
http://news.ycombinator.com/item?id=788157