A Lisper's first impression of Julia (p-cos.blogspot.com)
250 points by ananthrk on July 20, 2014 | 38 comments



Comprehensive! Covers the Lisp-y influences of Julia in great depth.

My perspective on Julia is that it has 3 ingredients:

1. A principled design that derives from the experiences of past programming languages, particularly the creator's experiences with Lisps. This is where a lot of the "magic" comes from: multiple dispatch, the type system, metaprogramming, etc. The article covers this aspect.

2. A need to be accessible to those transitioning from other languages, like MATLAB and Python. MATLAB, for example, guided Julia's function naming (Numpy also has similar names, for similar reasons). The author mentions the lack of distinction between creating a variable and changing its binding: I'd suggest this is an example of something affected by this design point.

3. A need to be fast. The author brings up the Int vs BigInt distinction. Python, for example, allows ints to get as big as you want, but at a cost: adding two ints is not simply an add instruction; a lot more work is required. Julia, falling on the side of performance, elects to distinguish between arbitrary-precision BigInts and machine Ints.
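The cost difference can be seen in miniature in Python, whose ints are arbitrary precision. Here's a sketch of what fixed-width machine arithmetic does instead (`wrap_add` is an illustrative helper, not from any library):

```python
# Python ints are arbitrary precision: they simply grow, at the cost of
# extra bookkeeping on every operation.
print((2**63 - 1) + 1)   # 9223372036854775808, no overflow

# A machine Int64 is a plain add instruction plus two's-complement
# wraparound, simulated here with masking:
MASK64 = (1 << 64) - 1

def wrap_add(a, b):
    s = (a + b) & MASK64
    return s - (1 << 64) if s >= (1 << 63) else s

print(wrap_add(2**63 - 1, 1))  # -9223372036854775808: Int64 overflow wraps
```

Julia's machine Ints wrap the same way on overflow, which is exactly why the single-instruction speed is available.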


Regarding the Int / BigInt distinction, other issues besides performance, which haven't historically been prominent considerations in new language designs, are interoperability and transparency. In the current design, a Vector{Int} always has the same in-memory representation as it would in C or Fortran – you can take a pointer to the first array element and pass it to a library function using the C ABI and it will just work. You also know exactly how your data is represented and can reason about it. You know, for example, that a Vector{Int} definitely does not require any additional heap allocation besides the inline Int values and that arithmetic operations on Ints will just be machine arithmetic ops. I think that the transparency of the C data and performance models has been one of the major reasons for C's long-lived success. One of the design goals of Julia is to have similarly transparent data and performance models.
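Python's stdlib `array` type gives a rough feel for this layout guarantee: elements are stored inline as machine words, so the raw bytes can be handed straight to a C function. This is a sketch of the idea, not how Julia itself is implemented:

```python
from array import array
import struct

v = array('q', [1, 2, 3])     # 'q' = signed 64-bit ints, stored inline
assert v.itemsize == 8        # one machine word per element, no boxing
raw = v.tobytes()             # exactly the bytes a C int64_t[3] would hold
assert len(raw) == 3 * 8
assert struct.unpack('3q', raw) == (1, 2, 3)
print("contiguous:", len(raw), "bytes")
```

A `Vector{Int}` makes the same promise: contiguous machine words with no hidden indirection, so a pointer to the first element is all a C or Fortran library needs.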


IMHO it is a total failure to have int be dependent on the machine architecture. C99 fixed this behavior with types for specific sizes so people could finally write portable code. Julia should adopt 64-bit integers by default given its intended audience and the reality that even some phones have 64-bit processors. int64_t works on 32-bit processors too, but with a performance penalty. Having the range of a variable depend on the machine architecture really went out of style a long time ago.


We considered that, but even though 64-bit ints work on 32-bit machines, they are dog slow. Insisting that integers are 64-bit everywhere is basically saying that you want slow for loops, slow array indexing – slow everything – on 32-bit systems. Clearly that's unacceptable in a language that is meant to be fast. So Julia has Int32 and Int64 when you want a specific bit size and Int is always the same size as your pointers. This arrangement is considerably simpler to deal with than C's "integers are whatever size I want them to be! [evil cackle]" approach. In particular, default integers and pointers are always the same size – which is not always the case in C (I'm looking at you, Win64) – so there's only one system-dependent size to worry about.
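The "one system-dependent size" can be checked from Python, whose struct module exposes the native pointer width (a sketch of the idea, not Julia code):

```python
import struct

word_bytes = struct.calcsize('P')   # size of a native pointer
print(f"native word: {word_bytes * 8} bits")
# Julia's Int mirrors this: it is Int64 on a 64-bit build and Int32 on a
# 32-bit one, so default integers and pointers always agree in size.
assert word_bytes in (4, 8)
```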


Great points. To add to the second point, much of the surface syntax is also similar to Matlab/Octave/Scilab, in addition to function names being similar. Of course, this similarity is only superficial and by design.


That (and not having daftly slow loops) is what got me into Julia from Octave. I tried it a while back and the biggest issue was the inequivalence of Array{n} and Array{n,1}; now that that's been fixed, it's awesome.

Parallelization in Julia is still a little scary though, I'm waiting a bit before delving into that.


This is a refreshingly specific post. Many articles of this kind ham-fistedly define various philosophical criteria throughout the post and make sketchy judgements within them. Here, however, there is just "here are the main comparative languages, here's the difference, here's where there may be issues".

NB. I'm very much in favour of a principled (qua philosophical) approach to language comparison (etc.), but it's rarely done well.


It's always great to see a post from a real Lisper instead of yet another Lisp philosopher.


As an aside (and please do not take it as a flame), this is a very neat article that shows a class of languages in a paradigm I have never considered: Lispy languages (semantically) without Lisp morpho-syntax. I had heard of Julia of course, and had seen a few mentions here and there of Dylan. It is interesting that Dylan, or even similar projects, attracted so little interest, because everyone complains about Lisp syntax (as I see here; I am an amateur Lisper and I understand its history and appreciate it), yet bemoans not having other languages with the power of homoiconicity and the other core parts upon which the macro system and other gems are built (I forget who said it: keep adding features to a language, and you get a much shittier Lisp).

Why did these languages not take off (at least pre-Julia)? I have heard other people "debate" (and I use the word here to mean disagreement on principle, not on the details of said debate) that Ruby and other langs are Lisp-like, but fall short. Dylan seems to have been Lisp (proper) without Lisp syntax on purpose (after intentionally moving away from it during the design phase). So why do languages with such powerful expressiveness (for your value of the word; I do not want to start that discussion either) never take off, Dylan or otherwise? It seems that is what all programmers, at least the ones more advanced than me, clamor for.


Aside from Apple having abandoned the language, the basic issue is that Dylan projects were very ambitious. Dylan was aimed at C++, so Harlequin and CMU spent a huge amount of time developing sophisticated native code compilers, thread-safe GC, compilation to native executables, etc. Harlequin also did a whole IDE, with GUI toolkit and Emacs-like editor, all written in Dylan and self-hosted. Ruby, Python, etc. showed there was a market for simple, dumb implementations that were nonetheless useful, and got to market quickly because it was easy to do a little C interpreter that did a dictionary lookup every other operation.

There's a renaissance in native-compiled languages now, mainly thanks to LLVM and the JVM. Having a fast optimizing compiler back end that generates binaries on many platforms is a huge head start, and goes a long way to making the language immediately useful. The JVM gives you those and then some.


> There's a renaissance in native-compiled languages now, mainly thanks to LLVM and the JVM

No, technology just goes in circles.

Like 30 years ago, when people started to realize that P-Code and other VM approaches were too slow and resource-hungry to be useful when targeting minicomputers.

Now mobiles and high electricity costs are making developers reach the same conclusions again.


> Now mobiles and high electricity costs are making developers reach the same conclusions again.

But rather than fall back on existing compiled languages, they are now trying to build something that has it all.


That is also not new. While minicomputers struggled with VMs and went back to AOT compilation, research-lab workstations already had mixed-mode execution.

The first JITs were targeted at Lisp and Smalltalk environments, and commercial Lisps always had JIT + AOT compilation support.

As for going for something new, it is hard to bring people back to technologies that are no longer mainstream, without adding something new to it.


Dylan was a victim of bad timing if anything. Apple was in the middle of its worst period when they shelved it. And then Java came along and the third parties that were working on independent implementations got steamrolled by that. It's a shame, because despite its verbosity, Dylan is a powerful language and still one of my favorites.


Greenspun's Tenth Rule is the one you're thinking of :)


It seems weird to try to characterize Julia in terms of object-oriented programming. Is that just me? Julia's approach to subtyping and multiple dispatch is sufficiently different from the C++ and Python approaches to OOP that I don't even put them in the same bucket, and it seems about as far away from CL's objects as well. Julia doesn't really advertise itself as OO; you can't even find the word "object" on the front page of their site. So I wouldn't try to think of it that way.

A lot of the comparisons in this article seem like that to me. Julia and Common Lisp are apparently just close enough to make a point-by-point comparison like this plausible, but things are not quite aligned close enough to make it work. It's still a good article with a lot of solid meat in it, but I think the topic would have been better served by going up the abstraction ladder a bit and talking about how the different paradigms of each language motivated the differences between them.

Disclaimer: I'm only somewhat familiar with Julia and not at all with Common Lisp.


It only seems weird insofar as you are ignorant (aren't we all?) of the generic-function approach to object orientation.

In the message-passing approach, you dispatch based on the type of the first argument to the method. Because it would be redundant to explicitly write down that argument, it is commonly syntactically elided (though not fully in Python), and you get coupling of the methods and the object. This is not a fundamental property of OO but an accidental feature found in most OO languages.

In the generic-function approach, the dispatch is extended so one can dispatch on the type (among other things) of (ideally) all of its arguments. Julia follows this approach, afaik, and if one is familiar with generic functions, it is not a controversial claim at all.
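A minimal sketch of the generic-function style in Python (the `_methods` registry and `defmethod` names are made up for illustration; real dispatch, in Julia or CLOS, also walks the subtype lattice and resolves ambiguities):

```python
# Toy generic function dispatching on the types of *all* arguments.
# Exact-type matching only; real systems also handle subtypes.
_methods = {}

def defmethod(*types):
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    fn = _methods.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Ship)
def _(a, s):
    return "ship destroyed"

@defmethod(Ship, Ship)
def _(a, b):
    return "ships bounce"

print(collide(Asteroid(), Ship()))  # ship destroyed
print(collide(Ship(), Ship()))      # ships bounce
```

Note that `collide` belongs to no class: the behavior lives in the generic function, not in either object, which is the decoupling the comment above describes.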

Erik Naggum explains it here in more depth: http://www.xach.com/naggum/articles/3243735416407529@naggum....


I'm not familiar with Julia, but from a Common Lisp perspective it is very natural to closely link OO and function dispatch. Generic functions were added to lisp as part of its object system and make up a large part of it.


Indeed! My first OO course at the University of Minnesota (1992 or so) was object-oriented programming using Scheme (using SICP). I was more of a C++ programmer at the time and the different perspective was eye opening for me!


It seems like a stretch to classify Julia as Object Oriented. It has types and record structures, but these aren't strongly coupled to functions like you'd see in most OO-style approaches. It also doesn't encourage hiding data behind a bunch of "accessor" methods on objects.

There also isn't OO-style inheritance and polymorphism. Julia's multiple dispatch reminds me most of Clojure's protocols, which is a great feature. It achieves polymorphic behavior of functions without having to do ugly things to the objects. That Julia can make this fast and ubiquitous in the language is pretty awesome.

I'm also new to Julia, but I really like the above design decisions, among others features.


I know that much about Julia, I just meant to say I'm not an expert or authority :). Anyway, "objects are not strongly coupled to functions", as you said, nails it.


I would very much like to read a comparison of CL and Clojure from the author at some point. As it seems he is offering a fair comparison.


> I would very much like to read a comparison of CL and Clojure from the author at some point. As it seems he is offering a fair comparison.

I suspect the author's main problem with Clojure is that it is a mostly functional language which heavily emphasizes doing things the functional way and discourages imperative programming, whereas CL is more of a true multiparadigm language.

I used to think that multiparadigm is best, but after migrating from Scheme, which is mostly functional but has a lot of mutation and a sad lack of interesting data structures apart from lists, to Clojure, which has good support for dicts and persistent data structures, I think I prefer a community that is more focused on one approach.


> I suspect the author's main problem with Clojure

That's not a very nice thing to do, suspecting people without any kind of evidence. Not to mention the fact that there is a `set!` form in Clojure, which makes it entirely possible to write very imperative code (and thread-local semantics don't matter in single-threaded programs).

Anyway, "problems with Clojure" can be very different for different people. I like Clojure design as a language - even its interop with OO host features are very neat - but then when I want to hack some simple script in a REPL I not only need to write this:

    $ rlwrap java -cp "clojure-1.5.1.jar" clojure.main
but then I need to wait for freaking 6 seconds for the prompt to appear. 6 seconds. I don't know what more I could write here, so I'll just paste this (Chicken Scheme):

    $ time csi -e '(exit)'
    csi -e '(exit)'  0,01s user 0,00s system 81% cpu 0,007 total
So that's my problem with Clojure, nothing to do with "functional way", right?


> So that's my problem with Clojure, nothing to do with "functional way", right?

Sure. For a solution to your particular problem, maybe ClojureScript will bring some improvement for CLI use, since Node.js tends to start faster than a whole JVM.


I feel the need to chime in: working in a language where the assumption is that everyone has agreed on a functional approach makes it much easier to stick to that approach myself. And, of course, reading other people's source code makes that much more sense.


It is in poor taste to put words in someone's mouth and proceed to 'debunk' said words.


I seem to remember his comments on some Lisp list on usenet or something, but I might be wrong, in which case I'm sorry and didn't mean to discredit the article's author in any way (I think that Pascal Costanza is a nice and smart fellow, and always had a positive impression of his posts, and Lisp communities used to be terrible places).


https://groups.google.com/forum/#!topic/comp.lang.lisp/HQFMh... has some discussion between the author and others from 2009 regarding Clojure and Common Lisp.


> I would like to see something like Oberon’s support for renaming identifiers on import in some Lisp dialect someday

I couldn't find any specifics on how it's done in the linked PDF (modules are described in section 11, at the end), but I think both Clojure and Racket do this already. The `require` mini-language in Racket is very rich and allows for prefixing, renaming, selective import of identifiers and so on: http://docs.racket-lang.org/reference/require.html#%28form._...
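For comparison, Python also supports renaming on import directly, which is roughly the facility being asked for:

```python
# Rename an identifier at import time to avoid clashes:
from math import sqrt as square_root

print(square_root(9.0))  # 3.0
```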


What is the ecosystem for Julia like? Could it be considered for systems programming tasks?


No. Julia makes extensive use of garbage collection, and has potentially very spiky latency, because any time it runs code which it hasn't run before (or the same code with different types passed in) it runs the compiler. Oh yeah, so that also means it needs LLVM as part of the runtime system.

It is an amazing language for its intended use, which is algorithmic code. From a language perspective, I would choose it over Matlab, R, Numpy, etc. any day. It is approachable for grizzled library writers (types, optimizations, introspection at many levels, macros, etc.) and more "casual" untrained scientist types (who just want to punch in their algorithm and call it a day). But if you want systems programming, there are plenty of other languages which fill that niche better.


Define systems programming. If you mean could you write servers in it, then yes (there's a web stack for it already). If you want to write an OS kernel in Julia, then you probably could, but I'm not sure you'd want to.


There are times when having an English word for "no, well– yes, but I don't know why you'd want to" would be very useful. It'd have to be pretty short to save breath every time a computer scientist must answer the question, "But is it a systems programming language?"


Maybe 無 (mu) [0], which is something philosophically in between yes and no.

[0]: https://en.wikipedia.org/wiki/Mu_(negative)#.22Unasking.22_t...


Mu is more like "you've made a category error" or "the question makes no sense" than "technically yes, but you wouldn't want to", which is what the post you were responding to seemed to be aiming for.


Wow. I really appreciate how thorough this post is!


I just wish they hadn't ruined the entire concept by going with one-based indexing.



