μLithp - a Lisp in 27 lines of Ruby (fogus.github.com)
137 points by Peteris on Jan 4, 2013 | hide | past | favorite | 98 comments



Before this devolves into another round of farcical Lisp armchair punditry: at 9 operators it's pretty much a kernel untyped lambda-calculus with pairs. And just that.

The implementation is as trivial as the concept is profound. The discussion should be about the lambda-calculus, not Lisp, which is a far more complex beast. And much less about Lisp dialects, Greenspun's, or the usual BS topics that invariably appear on any L-word thread.


The fun part that distinguishes an early Lisp from the pure lambda calculus is the mixing of strict evaluation and call-by-need. I had to hack that a bit by making a distinction between Procs created with proc vs those with lambda and by tearing apart function bodies. I forget how the original Lisp made the distinction. It looks like I need to re-re-re-re-re-read the Lisp 1.5 manual.
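A toy sketch of that distinction (not the actual μLithp code): Ruby's Proc#lambda? is true for Procs built with `lambda`/`->` and false for those built with `proc`, so an evaluator can treat the former strictly and hand the latter an unevaluated thunk. Names here are invented:

```ruby
strict = lambda { |x| x }         # strict: expects an already-forced value
lazy   = proc   { |thunk| thunk } # non-strict: expects a thunk

def apply_op(op, arg)
  if op.lambda?
    op.call(arg.call) # force the thunk before applying
  else
    op.call(arg)      # pass the thunk through untouched
  end
end

apply_op(strict, -> { 1 + 1 })      # => 2, forced eagerly
apply_op(lazy,   -> { 1 + 1 }).call # => 2, forced only on demand
```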


I thought they just had special rules for expressions with a car in {set!, lambda, if, quote, ...} such that arguments to those are not evaluated. From there you can construct whatever else. I might be mistaken, I'm not clear on the history of it.
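If it helps, that dispatch rule fits in a few lines of Ruby (a toy sketch with invented names, not Lisp 1.5's actual machinery): the head of the form is checked first, and arguments are only evaluated for ordinary applications, never for the special forms.

```ruby
def evl(expr, env)
  return env.fetch(expr, expr) unless expr.is_a?(Array) # atoms
  case expr.first
  when :quote then expr[1] # argument left unevaluated
  when :if    then evl(expr[1], env) ? evl(expr[2], env) : evl(expr[3], env)
  else # ordinary application: evaluate every argument (strict)
    env.fetch(expr.first).call(*expr.drop(1).map { |e| evl(e, env) })
  end
end

env = { inc: ->(x) { x + 1 } }
evl([:if, true, [:inc, 1], [:quote, :nope]], env) # => 2
```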

You know what you might like, if you haven't seen it yet, is the Kernel programming language, a Scheme where everything, including macros, is first-class.


If I had a little more time I would have written the uLithp eval in uLithp. The stark simplicity of that never fails to take my breath away.


Even then, it should inspire people to study the fundamentals, not go into big-language punditry :-D


Well, I can't control everything. :-)


That was a wishful "should"; a naive statement made in the hopes of it becoming self-fulfilling, an impersonal prayer to the universe. Not an imperative directed at you.

Besides, we have better uses for you ;-)


But will it be dynamically or statically scoped? :)


plug

Once upon a time I wrote a Lisp-to-Lua compiler in 100 lines. https://github.com/meric/l2l/commit/a530f0133e002c3981937d21...

It's now over a thousand. https://github.com/meric/l2l/


I think this is a neat project and a nice demonstration of how Lisp-influenced Ruby is.

However, whenever people post "Lisp in Ruby" stories, I always hope that it'll be a "Clojure in Ruby" implementation. I am surprised no-one has done it yet.


I always hope that it'll be a "Clojure in Ruby" implementation. I am surprised no-one has done it yet.

They have: http://rouge.io/ https://github.com/rouge-lang/rouge


Apart from using the JVM, what is the advantage of Clojure over other Lisp dialects?


I am mostly familiar with Clojure as opposed to CL (have read a bit of Practical Common Lisp) or Scheme (have read a decent chunk of SICP), but there are some pieces here and there.

Clojure is immutable by default, which is a pretty big difference AIUI. It also has a pretty strong emphasis on concurrency, including a bunch of primitives for such. It uses STM.

A bunch of data structures are first class, like vectors, maps, and sets. There are literals for each of them, and you can use maps and sets as functions. You can also use keywords as functions against maps to e.g. retrieve the :name value from a list of maps:

    user=> (map :name [{:name "foo"} {:name "bar"}])
    ("foo" "bar")
Lambda literals are nice:

    user=> (map #(* 2 (inc %)) [1 2 3])
    (4 6 8)
They can take multiple args (e.g. %1 %2) instead of just %.

Threading macros are somewhat closer to Haskell's syntactically straightforward function composition:

    user=> (->> [1 2 3] ;; this'll be the last arg passed to each fn below
    #_=>   (map #(* 2 %))
    #_=>   (map inc))
    (3 5 7)
I'm sure there's more. Maybe a more experienced Lisper can chime in.


Sad (for me) difference: tail call optimization, done automatically in Schemes, unavailable in Clojure. I know of recur and trampoline, but recur does not (I think?) work with mutual recursion and if I wanted to use trampoline, I'd be coding in JavaScript :)
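For reference, the trampoline idea itself fits in a few lines of Ruby (a sketch with invented names; Clojure's built-in trampoline is analogous but not identical): mutually recursive functions return thunks instead of calling each other directly, and a driver loop bounces until a non-Proc value comes back, keeping stack depth constant.

```ruby
def trampoline(value)
  value = value.call while value.is_a?(Proc)
  value
end

even = nil # forward declaration so odd's closure can see it
odd  = ->(n) { n.zero? ? false : -> { even.call(n - 1) } }
even = ->(n) { n.zero? ? true  : -> { odd.call(n - 1) } }

trampoline(even.call(10_000)) # => true, no stack overflow
```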

Personally I dislike Clojure's syntax - Scheme's feels more pure, cleaner. It's a bit like Perl vs. Python, largely a matter of taste.


I don't believe recur works with mutual recursion. I wish I understood mutual recursion better, frankly. Got any pointers about how one might use it in practice? In general my Lisp-fu could use a lot of improvement.

I would call it more like Ruby vs. Python — c'mon, perl? that's just mean — but yes, we're quibbling. :)


>Got any pointers about how one might use it in practice?

The meta-circular evaluator is a prime example of mutual recursion. It consists of two functions, eval and apply, which call each other in a circular fashion until a result is reached.
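A minimal Ruby sketch of that shape (invented names, nowhere near a full metacircular evaluator): `evaluate` hands applications to `apply_fn`, and `apply_fn` calls back into `evaluate` for lambda bodies.

```ruby
def evaluate(expr, env)
  case expr
  when Symbol then env.fetch(expr)
  when Array
    if expr.first == :lambda
      expr # a lambda form evaluates to itself here
    else
      fn   = evaluate(expr.first, env)
      args = expr.drop(1).map { |e| evaluate(e, env) }
      apply_fn(fn, args, env) # eval -> apply
    end
  else expr # numbers etc. are self-evaluating
  end
end

def apply_fn(fn, args, env)
  return fn.call(*args) if fn.is_a?(Proc) # primitive
  _, params, body = fn
  evaluate(body, env.merge(params.zip(args).to_h)) # apply -> eval
end

env = { add: ->(a, b) { a + b } }
evaluate([[:lambda, [:x], [:add, :x, :x]], 21], env) # => 42
```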


> Clojure is immutable by default

What does this sentence mean? How is Scheme less immutable by default?

Which data-structures are not first class "objects" in Scheme?


Again, I don't have much experience with Scheme; I've read about half of SICP. So that's a huge caveat here.

It may have been misleading to say that Clojure was immutable by default. It is immutable; it does not have set! or the moral equivalent. You can use concurrency primitives to update in place (sort of? AFAICT you're mutating the var itself, not the data it contains) but each primitive comes with its own semantics.

As far as first-class is concerned: in Clojure, maps, sets, and vectors are functions. They're functions of their data as well as being data, and there are literal representations for each.

Examples:

    user=> ({:foo 1} :foo) ;; maps are functions
    1
    user=> (:foo {:foo 1}) ;; keywords are functions, too
    1
    user=> (filter :name [{:name "foo"} {:surname "baz"}]) ;; filter by key
    ({:name "foo"})
    user=> ([\a \b \c] 2) ;; access vector by index
    \c
    user=> (#{1 2 3} 4) ;; test set membership
    nil
    user=> (filter #{2 4 6} (range 1 10)) ;; filter by set membership
    (2 4 6)
To what extent is any of that true of Scheme? I am reasonably certain that these data types can't operate as functions in Scheme. Glancing at the Racket docs, my impression is that you generally use a specific set of functions to interact with each type (e.g. hash-ref, hash-has-key?, set-member?).


This is a rather tricky subject - understanding the real behavior, what lies behind this or that piece of syntactic sugar.

In Scheme, for example, we could implement any data structure as a closure which accepts messages. There is no difficulty in writing such wrappers - it is just a closure which returns another closure that accepts "messages" and, following some protocol, returns, for example, other closures to be called for a certain action (generators, iterators, etc.)
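The message-passing closure idea, sketched in Ruby (invented names, mirroring the Scheme technique described above): the "pair" is just a lambda that dispatches on the message it receives.

```ruby
def cons(a, b)
  ->(msg) do
    case msg
    when :car then a
    when :cdr then b
    else raise ArgumentError, "unknown message #{msg.inspect}"
    end
  end
end

pair = cons(1, cons(2, nil))
pair.call(:car)            # => 1
pair.call(:cdr).call(:car) # => 2
```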

We could teach the read function to recognize any kind of weird syntax we wish, constructing appropriate data structures with type-tags attached to them. But it would become a mess.

The representation of the real data structures, however, is a very different thing. In Clojure, I suppose, it is based on built-in Java types and generic interfaces, such as Iterable or whatever it is. So they are ordinary Java objects, without any magic in them.

In Scheme or CL it depends on the implementation and the choices made by its developers. For example, Gambit-C and MIT Scheme are quite different in how they implement hash-tables or vectors. My guess is that, say, Allegro CL and CMUCL are also very different, yet they all conform to some standards (CLtL2, ANSI).

Having very different implementations with different sets of compromises is a strength.

So, in my opinion, there is absolutely nothing special in these syntactic constructions; moreover, it is not that difficult to implement them using closures and macros. In arc.arc you can see how strings and tables were implemented.

Another issue is: should we add all this weird syntax to what we call Lisp? In my opinion doing so ruins Lisp, and the result is a language with a very different look and feel. Calling it Lisp is, well, confusing, at least to me. Arc is a Lisp, no doubt. CL and Scheme are Lisps, for sure. Clojure is Java with lisp-like syntax, if you wish.)

The old rule says that there must be very heavy reasons to add any new symbol or keyword to a language, and that the same things shall look the same, and different things different. For me, personally, Arc is the proper approach, while Clojure is, well, a mess.)


>So, in my opinion, there is absolutely nothing special in these syntactic constructions; moreover, it is not that difficult to implement them using closures and macros. In arc.arc you can see how strings and tables were implemented.

You're right, there is nothing special about what Clojure does; everything could be implemented in Scheme just fine. The real advantage comes from the fact that these things are there "out of the box" so that libraries are written to use them.

There is a lot of inertia when it comes to libraries in a language. In some sense, you only get one chance to get it "right" before everything becomes interdependent and you can no longer change anything. Clojure's advantage is simply that it presented a chance to start over and get things right from the beginning.


Right, yes: what ships with the language shapes the way the language is used, and has wider impacts beyond what you could theoretically or even practically do with it.

A dumb example would be an Option/Maybe type in Java. You could write a good approximation. But none of the standard libraries would use it, let alone any user-supplied libs. Lisps are different in some very important ways, but since this is in part a meatspace phenomenon, it's still a challenge. Somebody went so far as to call it The Lisp Curse[0].

Anyway, syntax is a matter of taste, though I think people focus too much on it one way or the other— people don't like s-expressions b/c parens, people don't like Clojure b/c brackets/braces/whatever.

So while I agree that Clojure syntax isn't a huge game-changer, it can and does improve the quality of life for some people, myself included. Built-in literals are a big deal. And there is a lot to be said for being able to use these as primitives on day 1 of learning Clojure, esp. as your first Lisp.

[0]: http://www.winestockwebdesign.com/Essays/Lisp_Curse.html


In Racket, any custom data type can be turned into a function by using the `prop:procedure` structure type property. It's usually not done with things like maps and sets because this kind of "punning" is not idiomatic. Being a function is not actually a necessary condition for being "first-class" though.


I agree that it's not necessary. But it's a very strong signal, and thus illustrative.

AFAICT there is not a well-understood definition of first-class outside of "first-class functions." If I had to nail it down, I'd probably include the notion of first-class as applied to functions, plus:

* literal representations (where applicable)

* idiomatic, as demonstrated/enabled by the standard library

* well-integrated/interop with other primitives

This is just off the top of my head. And sometimes this is relative to other constructs in the language, or fuzzier, like Haskell lists vs. maps vs. sets.


> AFAICT there is not a well-understood definition of first-class outside of "first-class functions."

Why not just the same notion of "first-class" as functions? In other words, that the feature is actually represented by a run-time value that can be passed around freely and stored. This is the usual definition of "first-class" that I hear most people use in the programming languages world. Examples include first class control (continuations), first class references (boxes, mutable cons cells, etc.), first class environments, first class modules (see OCaml, units, etc.), first class labels, and so on.


Scheme has set-car!, vector-set!, hash-set!, etc., to modify its standard containers. In Clojure those containers are immutable. However, you could create equivalent immutable containers in Scheme; perhaps that's why he said "by default".
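A rough Ruby analogy for that difference (my own, not Scheme or Clojure internals): freezing a Ruby array rules out in-place mutation, so "updates" must build new values, much like Clojure's persistent vectors.

```ruby
v = [1, 2, 3].freeze

# v[0] = 99 here would raise FrozenError -- the set-car! style is ruled out.
w = v.dup.tap { |a| a[0] = 99 } # non-destructive update: a fresh array

v # => [1, 2, 3] (untouched)
w # => [99, 2, 3]
```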


I'm not sure about objective advantages (and note that I'm familiar only with Clojure, Common Lisp, and as much Scheme as there is in SICP), but Clojure feels kind of tight to me. In CL, arrays, hashmaps, sets, and lists have completely different interfaces, while in Clojure the interface is not only very similar (and nice - for example, a vector is also a function for accessing itself) but also lets you use the same code against different storage types. Clojure also felt more functional than CL, and more comfortable in ways such as distributing code, getting libraries, and things like that; it's visible that the world of development tools has moved on since the CL of 1985.


Its default usage of purely functional data structures and a rich concurrency model (STM, agents, and more).


Javascript as a possible compile target (https://github.com/clojure/clojurescript)?

Optional static typing (a la Qi) is being implemented, too.


Clojure has cleaner syntax and matches the expectations of people used to languages that assume an underlying POSIX-like operating system.


Could you please provide some examples of that syntax and those expectations?


Why would anyone? Clojure runs on the JVM which supports native os threads, concurrent garbage collection, etc, for one thing. Different tools for different purposes.


The JVM also takes 100MB to start up a Web server. Maybe I want to use less RAM?


If you want to host a webserver on an old 386 or on a credit-card sized Java smartcard, I understand your concern.

Taking 100 MB to start up may be a problem. But not exactly for a webserver.


It certainly can. I like to create a lot of small hobby sites for fun. I pay $20 for a VM with ~700MB of ram. How many JVM web apps can I fit on it vs node web apps?

Or, I run a few hundred servers at work, also on VMs. I want to create a process that needs to run on all of them in addition to what the server is actually used for. Maybe for monitoring, maybe for server management, etc. and I want it to have a REST API. Suddenly using 100MB matters to me.


Don't call it Lisp.

Lisp begins with something like this:

  (defmacro when (test &body forms)
      `(cond (,test nil ,@forms)))
Without the underlying list structure, environments, lexical scoping, and the reader function, it is just lisp-like syntax.

Lisp without lists at its core is nothing but another clumsy language. Look at Clojure - what a mess.


What's the problem with Clojure?:

    (defmacro when
      "Evaluates test. If logical true, evaluates body in an implicit do."
      {:added "1.0"}
      [test & body]
      (list 'if test (cons 'do body)))
Moreover, what's the problem with μLithp?

    l.eval [:label, :second, [:quote, [:lambda, [:x],   [:car, [:cdr, :x]]]]]

    l.eval [:second, [:quote, [1, 2, 3]]]
Seems list-y enough for me. I have a feeling that you didn't actually read the article.


There isn't any problem with this Clojure macro. If you would like to see problems with Clojure, take a look at the keep function in core.clj.

There is a classic function:

    (define (filter fn l)
      (cond ((null? l) '())
            ((fn (car l)) (cons (car l) (filter fn (cdr l))))
            (else (filter fn (cdr l)))))
The point is that stuffing more data-structures into a lisp ruins it. Somehow switching to prefix notation and adding parentheses doesn't transform Java into Lisp.

For the second piece of code - aren't the semicolons and commas somewhat redundant?

Moreover, what is the point of writing something this way?


Not sure what the objection to the keep function is, other than maybe the performance optimisations that chunked sequences allow. Taking away the chunked consideration gives something like:

    (let [x (f (first s))]
      (if (nil? x)
        (keep f (rest s))
        (cons x (keep f (rest s)))))
Which looks almost identical (with differences, as you seem to be giving filter rather than keep, which are different functions). The library function COULD be defined like this, it just wouldn't be as fast.


Taking away the chunked consideration...


>The point is that stuffing more data-structures into a lisp ruins it.

Could you please support this assertion?


Yeah, 27 lines. Unless you count the ~120 lines of sexpistol.

https://github.com/aarongough/sexpistol


Syntactic transformation isn't really what it's about, is it? Parsing S-expressions into some internal representation is easily the most boring component of a working Lisp.


My s-expression reader is 34 lines. https://github.com/fogus/ulithp/blob/master/src/reader.rb
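To illustrate how little there is to it, here is a deliberately crude S-expression reader in Ruby (a sketch with invented names, much rougher than the 34-line reader linked above): pad the parens, split on whitespace, then build nested arrays recursively.

```ruby
def read_sexpr(tokens)
  token = tokens.shift
  if token == "("
    list = []
    list << read_sexpr(tokens) until tokens.first == ")"
    tokens.shift # consume the ")"
    list
  else
    # integers become Integers, everything else becomes a symbol
    token =~ /\A-?\d+\z/ ? token.to_i : token.to_sym
  end
end

def parse(str)
  read_sexpr(str.gsub("(", " ( ").gsub(")", " ) ").split)
end

parse("(car (quote (1 2 3)))") # => [:car, [:quote, [1, 2, 3]]]
```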

A minimal Lisp (the bare minimal) does not require sexprs though.


A minimal Lisp (the bare minimal) does not require sexprs

Really? Why not? I realize that McCarthy's original spec used recursion equations, but that notation is more complicated than s-expressions.


Related: http://news.ycombinator.com/item?id=3511100

The above was an HN submission from early last year, Lisp in 32 lines of Ruby, which was the original blog post of this code by Fogus.


Not sure what this proves exactly. For fun, here's a Javascript equivalent in 27, err, "lines":

    module.exports = function() {
        var atom = function(val) { return val instanceof Array ? false : true; };
        var env = {
            label:  function(sexpr, senv) { senv[sexpr[1]] = evaluate( sexpr[2], senv ); },
            quote:  function(sexpr, senv) { return sexpr[1]; },
            "==":   function(sexpr, senv) { return evaluate(sexpr[1], senv) == evaluate(sexpr[2], senv); },
            head:   function(sexpr, senv) { return evaluate(sexpr[1], senv)[0]; },
            tail:   function(sexpr, senv) { return evaluate(sexpr[1], senv).slice(1); },
            conc:   function(sexpr, senv) { return [evaluate(sexpr[1], senv)].concat(evaluate(sexpr[2], senv)); },
            "if":   function(sexpr, senv) { return evaluate(sexpr[1], senv) ? evaluate(sexpr[2], senv) : evaluate(sexpr[3], senv); },
            atom:   function(sexpr, senv) { return atom(sexpr[1]); },
            lambda: function(sexpr, senv) {
                return function(lexpr, lenv) {
                    for(var i=0; i<sexpr[1].length; ++i) lenv[sexpr[1][i]] = evaluate(lexpr[i+1], lenv);
                    return evaluate(sexpr[2], lenv);
                };
            }
        };

        var evaluate = function(sexpr, senv) {
            senv = senv || env;
            if( atom(sexpr) ) return senv[sexpr] !== undefined ? senv[sexpr] : sexpr;
            else return senv[sexpr[0]](sexpr, senv);
        };

        this.evaluate = evaluate;
    };
Example:

    var notlisp = require("./notlisp.js"); 
    var l = new notlisp();
    l.evaluate( ["label", "second", ["lambda", ["x"], ["head", ["tail", "x"]]]] );
    l.evaluate( ["second", ["quote", [1, 2, 3]] ] );


I want to learn a lisp variant. I'm okay in several scripting languages--Python, Ruby, PHP.

Any obvious choice? (If not, I imagine I'd like to learn the most widely-used)


> I imagine I'd like to learn the most widely-used

Learn yourself some Clojure for great good.


Light Table implements clojure, right?

Then I can get busy with that ahead of time (I've been sweating a Python release)!

Thank you.


> Light Table implements clojure, right?

Yep!

Happy hacking: learning some Lisp variant is a great day/month/year/lifetime in any programmer's life.


Racket is another obvious choice.


Clojure has its selling points, but if you're learning Lisp for fun or intellectual gain, I'd definitely recommend Racket.


Common Lisp is the obvious choice


related: Norvig's implementation of lisp in python:

http://norvig.com/lispy.html


Norvig's implementation of Scheme in Lisp:

http://books.google.de/books?id=QzGuHnDhvZIC&lpg=PA756&#...


Also a Scheme implementation written in Perl by Bill Hails - Exploring Programming Language Architecture in Perl:

(book & code) - http://billhails.net/Book/ | Previous HN post - http://news.ycombinator.com/item?id=1747132


1993: «Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.» [1]

20 years later: for Ruby users, situation improves! :)

[1] http://en.wikipedia.org/wiki/Greenspuns_tenth_rule


"Common Lisp" means the loop macro and CLOS, I think. And you guys still don't have those ;)



Is it normal that I don't have the slightest idea what I'm reading here, or is it just me?


If you don't know what Lisp is, and are a programmer, it's high time to go read about its basic principles.

Lisp is very simple and very powerful. Those who don't learn it are doomed to reinvent it, badly (cough XSLT cough).


I'm not exactly a programmer. Could you explain how Lisp can be simple and powerful?


A lisp program's code can be manipulated by the program itself. Lisp programs are self-aware.

It's a bit mind-bending, but from this simple foundation comes the power to easily create and assimilate ever higher-level abstractions (which is what programming is really all about). Parts of a program can be used as a template for creating other parts as needed. You can assemble these parts and shape the language into something very specific for the problem you are solving.

Other languages come part-way, but don't fully embrace this concept (code=data) because it has a cost: it seems difficult to learn because it requires a change in thinking; it seems difficult to write (and read) because you're directly encoding a data structure and it all looks a bit 'samey'. Where other languages have 'shapes' in their code that give the trained eye an indication of what the code is expressing (such as for...do loops and indentation), lisp code has one shape: a tree [1].

I don't know if I've reached beyond your understanding. I haven't meant to, but the takeaway should be this: lisp programs are self-aware and can operate on themselves because a lisp program is just data like any other. This makes it easy for the interpreter/compiler (hence 'lisp in x lines of code'), but more difficult for the programmer (at first). As with all things, practice and familiarity overcome these initial difficulties. It is worth it.

[1] http://en.wikipedia.org/wiki/Tree_(data_structure)
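To make the code=data point concrete in μLithp's own notation (toy example, invented helper name): a "program" is just a nested Ruby array, so an ordinary array walk can rewrite the program before it is evaluated.

```ruby
program = [:car, [:cdr, [:quote, [1, 2, 3]]]]

# Walk the "code" like any nested array, swapping :car for :cdr.
def swap_op(sexpr)
  return sexpr unless sexpr.is_a?(Array)
  sexpr.map { |e| e == :car ? :cdr : swap_op(e) }
end

swap_op(program) # => [:cdr, [:cdr, [:quote, [1, 2, 3]]]]
```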


> code=data

This is a really good explanation, and I just want to add a small bit to it, having had to use Lisp as a primary language quite recently due to some AI projects in cognitive modeling. One thing that I've found personally is that by understanding Lisp, you can essentially gain a deep understanding of every __other__ programming language.

If I had the chance to do my entire Computer Science career over again, I would have learned Lisp as my first language, and then compared every other subsequent language against it. Because of the deep symmetry between code and data, it is one of the easiest languages for implementing and re-implementing programming language features on top of.


Thanks for your explanation.

I know a bit about programming but it only comes down to PHP or jQuery variables, functions, and loops. Lisp looks very interesting and I had already heard about it (through pg obviously). It might just be the language that will fascinate me if I ever took programming more seriously.


Chess has relatively simple rules (initial board layout, legal moves, win condition, some fanciness with en passant and castling, etc), but the game is endlessly complex, and there are more boards than a human being could ever see in his or her lifetime. Lisp is similar in that regard.


I'd say comparing it with Go might be easier: Go has 8 rules and can be played on a board of any size. I like to compare that to the 10 commandments in "The Little Schemer" or the 9 functions in μLithp.


Could you explain how Lisp can be simple and powerful?

You've asked a profound and important question, not so much about Lisp as about systems in general. A system is simple when it doesn't have many pieces. But if the pieces it starts with are very general, and the system provides good ways to put them together, then you can build complex things out of a tiny initial set.

Think of a classic Lego set. There are only a few "primitives" (different kinds of Lego piece). And there is an easy and standard way to put them together. Starting from these simple initial conditions you can construct very complex structures. Lego is simple and powerful at the same time.

How do you build a complex Lego structure? You start by putting a few pieces together to form a cluster. And then you make a second cluster and you join the two clusters together to make a third, and so on. The critical property here, the reason why Lego stays simple all the way, is that the "operation" you use to join two clusters is exactly the same operation you use to join two individual pieces – you interlock their knobby bits. In this sense your clusters are still "Lego pieces", just custom ones that didn't ship with the initial set: they still have knobby bits suitable for joining up with any other piece. This quality is sometimes called "regularity", meaning that the initial properties of the system are preserved as you put things together into ever more complex forms.

Suppose that weren't the case and that each time you reached a certain level of complexity you had to learn a new technique for building further. That would make Lego much less simple-and-powerful. You could still build complex things, but it would be harder and more complicated. And there would probably be a bunch of things that, while you technically could build them, it would be so hard and complicated to do that nobody would bother.

Programming languages also consist of a set of initial pieces and ways of putting them together. But most languages don't have the quality of regularity that Lego has. Their initial pieces aren't universal enough to get away with having only a few, so they need many more of them, and that means they're not simple, and that means they're not simple-and-powerful. There are, however, a few programming languages that do have this regularity. Lisp is one of them. Smalltalk, Forth, and APL are others.

The really interesting thing is that this kind of simple-and-powerful system enables you to do qualitatively different things as complexity grows — not because other languages can't do them, but because they make it too much trouble to bother. For example, because Lisp programs are Lisp pieces in the same sense that Lego structures are Lego pieces, it's easy to write Lisp programs that take other Lisp programs as their inputs and do meaningful things with them. That is a powerful technique, and because it's so easy, Lisp programmers exploit it heavily, more than is practical in most other languages, and that means they can do more with less.

This quality of simple-and-powerful is in my view very important and underutilized for managing software complexity. We don't yet understand it all that well, because the programming languages that achieved dominance to date don't have it.



It would be more precise to say minimalistic, rather than simple.


So true re XSLT etc.


http://www.defmacro.org/ramblings/lisp.html

I have been playing with Clojure, a Lisp dialect, for six or so months (it seems so much longer), but this was still really interesting to me.


I can't say for sure - but I would guess it's normal for a lot of HN articles to go over your head if you are not technical or lack a CS degree.


I do lack a CS degree; however, I've been a PHP developer for years and recently switched to Ruby, or more specifically RoR.


This video course should explain the whole situation and more and is targeted at undergrads with essentially no background knowledge:

http://ocw.mit.edu/courses/electrical-engineering-and-comput...

Ignore the hairdos, it's from the 80s, the information itself ages better than the fashion.


Very useful, thanx!


You should use Docco for your page, I find it much more readable: http://jashkenas.github.com/docco/


Ah, but this is org-mode! He can evaluate his code inside his editor! Grumble rah rah emacs!

He could probably set up his publish templates to accomplish the same formatting.

(huge org mode nerd, checking in)

--- EDIT ---

And what do you know, Eric Schulte already implemented this for org mode! It's checked in under contrib.

Instructions: http://eschulte.me/org-docco/org-docco.html

Org source: http://orgmode.org/w/?p=org-mode.git;a=blob_plain;f=contrib/...


I'm quite familiar with Docco having helped implement something similar for Clojure.[1] I wanted to use org-mode. Sorry it's not readable.

[1]: http://github.com/fogus/marginalia


Or, well, Rocco in this case: http://rtomayko.github.com/rocco/


Cool project, but I think it's been done before in fewer lines: https://gist.github.com/4176087


I've never understood the line-count craze some people get on. If it was that important, we'd all be using J or APL. Since we're not — in fact, almost nobody is — it must not be that critical.


> If it was that important, we'd all be using J or APL.

Fallacy. You haven't eliminated the possibility that line count is important but that too few lines is also bad. It's widely understood that line count, even though it's a hazy and exploitable metric, is indeed important. It's also widely understood that being too terse is as bad as being too verbose. Time to brush up on some old-school CS curriculum books. (The Mythical Man-Month? I'm not sure if the above is discussed there, but it's a good place to start.)


I haven't proved anything, true — I was illustrating a point with an example, not writing a series of syllogisms. What I meant to do was to direct the reader to the broader idea that our language usage doesn't suggest that line count is a pressing concern past a point that allows for considerable verbosity.

Now, wasn't it more enjoyable the way I put it first?


It's more accurate and less misleading the way you put it second. The way you put it first, it suggests that you've shown that line counts don't matter.


It's just supposed to be fun, challenging, and a learning experience.


The linked gist has 40 lines (36 LOC); the article from the post boasts 27 lines.


That's based on an older version of uLithp that had a couple bugs.


Who’s up for implementing a μRuby on top of this μLisp?


I love the economy of lisp implementations in dynamic languages. It's especially nice because you don't have to worry about memory management.

On the other hand, I've been interested in doing a Lisp in low-memory environments (microcontrollers, etc.). I've done a dialect in C using a semi-space garbage collector. But I'm curious if anyone's done any work on Lisps in resource-constrained environments.


Really? Lithp? Not that I'm offended, but the pun seems unnecessary.


Is necessity a prerequisite for humor?


It's not. And it's totally understandable how this could pass for humor in middle school.


My one question is: Why?


It's weird that this question keeps coming up in several cool-but-not-that-useful HN posts.

Sometimes it's cool to do things just because you can; maybe he was bored, or he always wanted to implement a Lisp dialect.

It might not be that useful, but I'm sure the author had a blast doing it; otherwise he wouldn't have done it.


But really the "why" is that it was fun. Also, it was my submission to the PLT Games. http://pltgames.com/competition/2012/12


Why not?


To learn.



