Why is Clojure so slow? (martinsprogrammingblog.blogspot.de)
80 points by Mitt on July 10, 2012 | 94 comments



Clojure has a slow startup time. It is not in itself slow.

It's important to get these distinctions right! Some people, unfortunately, form their impressions from headlines, and in this case the headline is extremely inaccurate. If you read the article you'll get the truth: there are startup-time problems that cause issues when using Clojure for command-line programs.

That's much less damning overall than "Clojure is slow."


You should have read the entire article:

> How fast is Clojure at running your code once it finally has got going? ... Clojure is on average 4x slower than Java and 2x slower than Scala.

That's pretty freaking slow.


And about twice as fast as Erlang, 5x faster than Ruby, Python or PHP and 10x faster than Perl.[1]

It's all relative.

[1] http://shootout.alioth.debian.org/u64q/which-programming-lan...


Those benchmarks are fairly useless, because they don't compare idiomatic usage of those languages, but rather the capability of those languages to drop down to low-level primitives and libraries.

I did an experiment once, testing a simple web service on the JVM: Scala (with Scalatra) yields the same performance as Java (with Jax-RS), while Clojure (with Noir) is only 2x to 3x slower and JRuby (with Sinatra) is only 4x to 5x slower than Java.
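For reference, the kind of endpoint such a comparison measures is trivial. A minimal sketch as a plain Ring handler (illustrative only; the actual experiment used Noir, which sits on top of Ring):

    ;; A Ring handler is just a function from a request map to a response map.
    (defn handler [request]
      {:status  200
       :headers {"Content-Type" "text/plain"}
       :body    "hello"})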


>>Those benchmarks are fairly useless, because they don't compare idiomatic usage of those languages...<<

You seem to be complaining that the programmers were not forced to write slow programs :-)

>>I did an experiment once...<<

And you used libraries.


No, I'm complaining that those benchmarks aren't mirroring the way developers actually write code.


What specifically do you think the difference is, and how do you know?


This is the same benchmark that is used in the OP. So if you object to the characterization of Ruby/Python/PHP vs Clojure, you can make the same objection regarding the characterization of Clojure vs C.


While interesting, those benchmarks are only useful if you're implementing something remotely similar to the actual benchmarks. Just glancing over them, they seem to be "unfairly" targeted at low-level languages.

That said, they are fun to look at. I just had a wtf-moment looking at this:

http://shootout.alioth.debian.org/u32/performance.php?test=n...

~20 seconds vs ~20 minutes??


This sums up my feelings for the benchmarks:

http://shootout.alioth.debian.org/u64q/benchmark.php?test=fa...

Note that the "alternative" Lisp SBCL and Java 7 programs both outperform Fortran.

Of course I agree with you on wtf-moments. WTF makes Ruby take an hour, and SBCL take 10 seconds? That's two orders of magnitude! But no matter, I imagine a more clever Ruby programmer could reduce that, or just call out to a native library.


>>Note that the "alternative" ...<<

A program that simply switched on the command-line arg and then printed:

    3968050
    Pfannkuchen(12) = 65
would also outperform :-)

>>I imagine a more clever Ruby programmer could reduce that...<<

Is "a more clever Ruby programmer" some kind of equivalent to "a sufficiently smart compiler"? :-)

>>or just call out to a native library<<

When is a Ruby program fast? When it's written in C ;-)


> A program that simply switched according the command line arg and then printed…

Of course making programs do less can improve speed, and a great way of doing that is compile-time computation via macros! You can finish the program before it's even run.
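A toy sketch of that idea (a hypothetical macro, not from the benchmarks): the reduction below runs at macroexpansion time, so the compiled program only contains the final constant.

    ;; The sum is computed when the macro expands, i.e. at compile time;
    ;; the running program just sees the literal 499500.
    (defmacro precomputed-sum [n]
      (reduce + (range n)))

    (precomputed-sum 1000) ;=> 499500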

> Is "a more clever Ruby programmer" some kind of equivalent to "a sufficiently smart compiler"?

No, since we assume human intelligence here. :P As the Graphics Programming Black Book puts it in the Chapter 1 title, "The Best Optimizer is between Your Ears".

> When is a Ruby program fast? When it's written in C ;-)

I'm going to use this one.


I've spent enough time asking for programs for the benchmarks game in Ruby forums, to start to doubt whether the "more clever Ruby programmer" will ever come forward ;-)

Maybe "a more clever Ruby programmer" always drops-down to C?

Maybe there's only a more clever Rails programmer :-)


1) those benchmarks are only useful if...

Any benchmark is only useful if...

http://shootout.alioth.debian.org/dont-jump-to-conclusions.p...

2) "unfairly" targeted at low-level languages

Don't make "unfair" accusations -- say why you think that.

3) ~20 seconds vs ~20 minutes??

Did you mean vs ~20 hours?


> Don't make "unfair" accusations -- say why you think that

Because there seems to be a bias in selecting the problems solved in the benchmark games: to be "fair" they should be randomly selected from a pool of all possible problems solved with computer programs. Or, maybe, the frequency of these problems in the real world should be taken into account?

Of course, they're not actually unfair since they hide nothing. The error is in the interpretation, e.g. "My program will be faster if I write it in language X instead of Y."

The nice link you posted sums it up well: "Programming languages are compared against each other as though their designers intended them to be used for the exact same purpose - that just isn't so."

edit: Spelling..


1) "they're not actually unfair"

So don't say that they are!

2) "The nice link you posted sums it up well"

I agree - but then I wrote those words.

3) "there seems to be a bias in selecting the problems solved in the benchmark games"

You still haven't said anything that suggests they are "targeted at low-level languages".


1) I didn't .. 3) I did in GP


1) You implied "unfair" by opining on what they would have to do to be "fair"; and you said something less vague than "unfair", you said "unfairly" targeted at low-level languages. If you don't mean to say they are "unfair" then your words are going to confuse ;-)

3) GP? Your opining about "selecting the problems" doesn't support the claim "targeted at low-level languages" any more than it supports the claim targeted at high-level languages :-)

As you noted the important thing to remember is that timing measurements are not promises, and they aren't general answers to the question - Will my program be faster if I write it in language X?


On the other hand, those languages make no pretense of being fast.


You can always match the speed of Java in critical areas by disabling number boxing, adding type hints, and using the mutable versions of data structures. This is what Prismatic did for their ML computation code: http://www.infoq.com/presentations/Why-Prismatic-Goes-Faster...
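For the curious, here's a minimal sketch of what those tricks look like (illustrative code, not Prismatic's):

    ;; Unchecked arithmetic: skip overflow checks on +, -, inc, etc.
    (set! *unchecked-math* true)

    ;; Type hints (^long, ^longs) avoid reflection and number boxing;
    ;; native Java arrays avoid persistent-collection overhead.
    (defn sum-array ^long [^longs xs]
      (let [n (alength xs)]
        (loop [i 0, acc 0]
          (if (< i n)
            (recur (inc i) (+ acc (aget xs i)))
            acc))))

    ;; Transients: locally mutable versions of the persistent collections.
    (defn build-vec [n]
      (persistent!
        (loop [i 0, v (transient [])]
          (if (< i n)
            (recur (inc i) (conj! v i))
            v))))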


> That's pretty freaking slow.

I suppose in a vacuum it is indeed (assuming the measurements are general), but in my actual use the speed of Clojure has been more than sufficient. People like to view programming language speed as a black and white issue in the context of blog and HN posting, but in many uses runtime and startup speed considerations are only part of the puzzle.


Slow/fast is a subjective metric; however, in the case of a language like Clojure you have to pay the price for convenience in the form of a performance tax when compared to Java or Scala.

This may be acceptable; it depends on where you're coming from and what your needs are. For instance, if you are coming from Ruby or Python because you want much better performance and you think the JVM is awesome for that, then Clojure may not be a good choice. On the other hand, if you want a modern Lisp that's also reasonably fast and has access to the JVM, then Clojure is a really good choice.

Personally that's why I like Scala, but you know, the best language is whatever makes you happier and more productive.


Tax is the wrong word... It's not the case that Clojure (the language, the developers, etc.) benefits from slow startup. Fine is probably more accurate.


"Performance" is never the end-goal, but rather a coin you can trade for other things, like convenience, lower latency, fewer hardware resources, getting results faster, scalability, etc... basically anything that makes you or your clients happier. When people say they want performance, they actually mean they'd like the flexibility to gain something else by paying with performance.

That's why I used "tax" which I think is very suitable.


Actually, part of the reason startup is slow is that the JVM's JIT compiler doesn't optimize its compilations until it has determined that a given method is in a hotspot that would benefit from the optimization. Because it relies on runtime information, it's able to perform much better-informed optimizations than it could with a fully up-front compiler, but the price you pay is that the optimizations are deferred.

So in this case tax is exactly the right word.


To Clojure-curious people: please ignore this article. It is misinformed.

Others on this thread have pointed out that the Alioth benchmark takes startup time into account. Yes, this imposes a startup penalty on Clojure.

More importantly, the implementations of each individual benchmark vary significantly in performance quality. High-performance Clojure requires a couple of tricks: type hinting, unchecked arithmetic, and preferring native Java arrays.

I looked at a couple of the benchmarks, and the mandelbrot example uses those tricks. Notice the performance there: http://shootout.alioth.debian.org/u64/performance.php?test=m...

Notice that Java 7 and gcc run at about the same speed, and Clojure is only about 2.2x slower (than C). Scala is about 1.9x slower (than C). gcc is about 1.5x slower than Intel Fortran.

To make the benchmark more fair: (1) all timings should disregard startup time; and (2) all JVM languages should have the opportunity to run the benchmark a few thousand times before timing it. Otherwise, it measures JVM startup time, and then it measures how long it takes the JIT to achieve maximum optimization. By comparison, the C and Fortran code runs at full speed almost out of the gate.

All in all, considering that Clojure is an extremely high-level language, I consider its performance impressive [1]. Yes, the inner loops need to be coded in a slightly un-idiomatic manner, but you can do all this in the comfort of your REPL, which makes the process of making optimizations reasonably painless.

[1] Don't forget to scroll down the mandelbrot results and look at the stellar performance of other popular high-level languages.


>>"Otherwise, it measures JVM startup time, and then it measures how long it takes the JIT to achieve maximum optimization."<<

Please take that Clojure mandelbrot program, make repeated timing measurements without restarting the JVM and then report how those times compare to cold start on your computer.

The mean "warmed" times for the Java mandelbrot program were actually slower than the reported cold start time for the same program.

http://shootout.alioth.debian.org/help.php#java


Sure thing. I made it run for 4000 cycles twice. The first (cold) run took 2831.323 ms, and the warmed-up run took 2571.397 ms. About 10% faster.

Judging by the invocation noted at the bottom of http://shootout.alioth.debian.org/u64/program.php?test=mande..., the comments about Java in the FAQ do not apply to the Clojure code. The benchmark seems to have been invoked straight from the command line.


Cool! Now let's check the basics:

- 2831.323 ms for what workload? The benchmarks game measurements are made at 3 different workloads; but the times that matter are those for the largest workload, in this case N=16,000. So please show the times for N=16,000.

- the benchmarks game measurements are made with output redirected to /dev/null

- both the clojure and java programs are invoked straight from the command line, and include start-up. The Help page provides additional "warmed" measurements for the fastest Java programs, for comparison - because sometimes the JVM startup costs are larger in the mind than they are when measured :-)


Well certainly, in a real-world program which runs for long periods analyzing tons of data, JVM startup costs and JIT costs amount to nothing, as they are paid in the first few seconds. But we are talking about short, synthetic benchmarks here.

I ran mandelbrot for 4000 cycles, just invoking the work function twice in a row and wrapping each call in Clojure's (time ...) form. This all happened in an AOT-compiled .class file, which guaranteed a cold start. For what it's worth, I didn't bother tuning the GC or any other JVM parameters — I suspect I could have made it run a bit faster by manipulating generation sizes.
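Roughly like this, where `mandelbrot` is a stand-in for the real work function (a sketch, not the actual benchmark code):

    ;; Stand-in for the benchmark's work function, for illustration only.
    (defn mandelbrot [n]
      (reduce + (range (* n n))))

    (defn -main [& args]
      (time (mandelbrot 4000))   ; first call: includes JIT warm-up
      (time (mandelbrot 4000)))  ; second call: mostly optimized code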


>>I ran mandelbrot for 4000 cycles<<

That reduced workload only runs the program for 1/10th the time of the workload shown on the benchmarks game website.

Run the program for N=16,000 and see that "JVM startup costs and JIT costs amount to nothing" even for these "short, synthetic benchmarks".

(Incidentally, the "usual" cold start measurement shown on the website is the best of 6.)


Don't compare Clojure to C; comparing it to Python or Ruby makes more sense because they're all high-level dynamic languages. Sure, Clojure runs on the compiling JVM, but that doesn't put it into another category; you can also JIT-compile Python in a few ways.

If we can make working websites and even games in Python, we can make them in Clojure too. What kills Clojure for small scripts and not-long-running applications is the startup time, and that can mostly be attributed to the JVM. Also, the memory footprint of a JVM process tends to grow a lot over time.

I see Clojure rising above specific platforms, though. JVM is in a slow death spiral. CLR might have some traction on Windows. If Python got its platform resettled on something more sophisticated than CPython, that might be a good ecosystem for Clojure.


> What kills Clojure for small scripts and not-long-running applications is the startup time, and that can mostly be attributed to the JVM

Did you not read the article? Quote: "What we can see is that Java itself accounts for 0.35s of the startup time, but unfortunately Clojure adds another second(!) on top of that."

So no, it cannot be mostly attributed to the JVM.

Also, the JVM in a slow death spiral? What gives you that impression, the growing number of languages that target it? The fact that new versions continue to receive improvements, such as better support for dynamic languages in v. 7?


> Don't compare Clojure to C; comparing it to Python or Ruby makes more sense because they're all high-level dynamic languages

For me the more obvious comparison would be with other Lisps. Which, back in the dim past when I was using 'em, were often damn fast.


If you're looking for a fast JVM-based Lisp the obvious choice would be Kawa Scheme: http://per.bothner.com/blog/2010/Kawa-in-shootout/

But as fogus said, speed is only one piece of the puzzle. http://news.ycombinator.com/item?id=4223562


>JVM is in a slow death spiral.

How so?


Maybe it's the typical JVM bashing.

In most enterprises, regardless of what we think about the JVM, it is still quite healthy.


For my part, I don't trust Oracle to be a good steward for Java. Who knows - time will tell.

It's not all gloom and doom though; Attila Szegedi is at Oracle now, working on Nashorn - something exciting for Java 8.


Unfortunately for Oracle, Java represents a genie that's out of the metaphorical bottle. They have the privilege to be its benevolent steward, however if they keep pushing for control in the face of its community, they'll lose whatever control they have left.

The recent lawsuit kind of highlights that. Google is riding on years of development and refinement of Java IDEs and on mountains of available open-source libraries. They jump-started an Android community from zero by building an alternative VM for targeting Java source code, while giving the finger to Sun/Oracle and their licensing.

And I don't think they are so stupid as to not realize this.


What I miss in Java is the ability to compile directly to native code as part of the official SDK, instead of using expensive third-party compilers.

The problem I see with Oracle is that if they push the community too much, companies might abandon it the same way they abandoned Delphi when Borland made too many mistakes. On the other hand, speaking from my experience in the enterprise world, corporations love Oracle.


So he left Twitter. I didn't realize that. Exciting indeed.


How slow is slow?

If it takes me 3 months to deliver a given program in Clojure and 6 to deliver its Java equivalent, the Clojure one already has 3 months of lead. Assuming the Java one is twice as fast, it'll take 45 days to catch up.

Development time is expensive, computers are cheap and get twice as fast every year or so.

While a long startup time is annoying, it can certainly be optimized away if someone focuses enough attention on the low-level aspects of the runtime.


I get what you're saying, and you're sort of right, but it's still interesting to figure out why that startup time is so slow, or what other things slow it down. That talk from Daniel Solano Gómez about Clojure on Android was pretty interesting in that regard. I would want to encourage that kind of investigation, although this blogpost has a terrifically flamebait title.

> Development time is expensive, computers are cheap and get twice as fast every year or so.

It's a bit funny that you're citing Moore's law when Clojure is specifically designed to overcome its breakdown and get out ahead of that lagging curve. (Paraphrasing an early talk from Rich Hickey: "The hardware guys are punting!!")

You can turn that right around as fuel for your original point: that Clojure and its approach to concurrency buy you tons of developer productivity, compared to whacking about in the weeds with Java. I totally agree there; those higher-level features are valuable and worth something, but they do not cost nothing.


Single-thread performance isn't increasing that quickly, but machines like the Xeon Phi should be rather sweet for highly threaded (or multi-process) apps. Also, if we can make GPUs run Java bytecode, we would unlock a whole lot of GFLOPS that are just pushing pixels now.


That 45 day head start must be divided by the number of users running the program though.


Each of those users would be running it 45 days sooner so that cancels out.


No it doesn't. Development time is a one off (for a given feature set). Usage is recurring, so savings in running time catch up and eventually dwarf development time.

But I don't think this calculation makes much sense in the first place. It's simply not that linear and depends on many other things, for instance whether it's a throughput or response time problem, the relative value being the first to market versus being the best, etc.


> Development time is a one off (for a given feature set)

Indeed. With a feature set dynamic enough, the lead will mount up.


Or the lead may just shrink more slowly.


Time running the app is irrelevant compared to the time it takes to develop it. As long as the app delivers the answer quickly enough from the user's perspective, fast enough is fast enough, and being able to iterate twice as fast as your competition is the ultimate advantage.


Making it fast enough from a user's perspective comes at a cost though (where it is possible at all). If you have many users and you have to pay that price for each one, it becomes very expensive.

All I'm saying is that at some point it becomes way more expensive than paying developers to optimize or rewrite in a faster language, which is exactly why Google and Facebook are doing so much work in C++, not exactly a language known for developer productivity.

And obviously there are many places where you can't scale your way out of a response time or battery usage issue because you're not the one buying the machine.

So I totally disagree with your assumption that developer productivity always trumps runtime efficiency. It is also my experience that the productivity advantages usually ascribed to some (mostly dynamic) languages are way overblown. But that's another debate.


Even if I have to rebuild part (or all) of the application at some future moment, the initial productivity boost is worth a lot. Rebuilding a program that is already running is usually far less painful than building a fast one from scratch. At the least, the developers will have a test suite they can use to check whether their version is correct, and how much faster it is.

I agree there are cases where only the leanest and meanest code will do, but my point is that those cases are very rare.


Development time != execution time


That was the point. Execution time is mostly inconsequential compared to things like development time and the competitive edge provided by the language you're using; you know, the actual money-making factors.


"When using the ClojureScript compiler on a hello word example with advanced optimisation, we end up with some 100kb of Javascript. [...] The Google Closure compiler certainly helps here by removing lots of unused code, and the resulting Javascript file is indeed free from all docstrings etc."

So does the ClojureScript compiler basically just embed a Clojure interpreter in every file? I'd be interested to see the code prior to optimization.


No, ClojureScript does not interpret ClojureScript at runtime. The Clojure forms are compiled down to JavaScript directly. More info at http://blog.fogus.me/2011/07/21/compiling-clojure-to-javascr...
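As a rough illustration (simplified; real output is munged through cljs.core and the Closure compiler, so don't take this literally):

    ;; ClojureScript source:
    (defn greet [name]
      (str "Hello, " name))

    ;; compiles to JavaScript along these lines:
    ;; my.ns.greet = function (name) {
    ;;   return cljs.core.str.call(null, "Hello, ", name);
    ;; };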


I wonder if his figure is simply mistaken, then. Because 100 kilobytes is a heck of a lot of code for a hello world. The compiled representations in your blog post seem far more reasonable.


100K is a lot of code for a Hello World, so don't use ClojureScript to write Hello World apps. The cool part is that building a largish app will not necessarily grow the output JS.


Sure thing, but my curiosity still stands. :) If you're using Closure Compiler to perform dead code elimination (100kb is the "heavily optimized" number, as far as I can tell), how is it that `console.log('hello, world!');` requires 100kb of essential (non-eliminatable) scaffolding?



Have a look at Rich Hickey's keynote presentation from Conj 2011: http://blip.tv/clojure/rich-hickey-keynote-5970064

He pretty much starts off by talking about making Clojure "leaner", faster at starting up etc.

He mentions things like a "production" jar with less metadata, a hoisted evaluator, and even some kind of tree shaking à la ProGuard.


If Clojure could dump the Lisp image like SBCL's save-lisp-and-die function, the startup time would be greatly reduced. I wonder if JVM itself can dump its current state and then restore execution.

Another approach to the problem would be similar to FastCGI: keep one Clojure server process running and execute scripts on it.


> If Clojure could dump the Lisp image like SBCL's save-lisp-and-die function, the startup time would be greatly reduced. I wonder if JVM itself can dump its current state and then restore execution.

It's still hard to understand why Oracle isn't working on something like that. It's not as if they don't care about the desktop at all -- JavaFX is going to be part of Java 8 and they are even working on a new packaging tool-chain for it. The JVM's start-up time and inability to allocate memory when needed (as compared to the up-front way it's done now) are the major reasons why Java/the JVM is (still) a bad solution for desktop and CLI applications.


There is something like that as an internal research project, but it never made it into the mainstream JVM.

http://java.sun.com/developer/technicalArticles/Programming/...


Interesting - that paper talks about isolates, which are implemented in Dart. Dart core team members used to work on HotSpot and other Java tech (CLDC).


"JVM's start-up time and inability to allocate memory when needed (as compared to up-front way it's done now).."

Care to elaborate? Are you saying that the JVM can't malloc()? :)


As far as my understanding of the JVM goes, it works with memory in quite a different way. When the JVM starts up it allocates a large chunk of memory up-front (usually hundreds of megabytes) even though it doesn't really need all that memory; usually only part of it is actually in use at any given moment. The JVM then manages that memory with a custom internal malloc/free which allocates memory to Java objects. All this is needed to implement efficient garbage collection. See [1] for details.

Such complicated memory management is really efficient and works great for server apps. On the client -- not so much. In that case I would rather prefer a slower GC that just uses malloc/free. Why? Because from the end-user's perspective it's not great when a simple app grabs 100-200MB of memory. If all apps were like this you wouldn't be able to run many of them at the same time (especially on older hardware). Moreover, for complex JVM apps, 500MB+ of allocated RAM might be a requirement. [2]

A great example of how things should work is .NET/Mono. Both of the .NET runtimes allocate memory only when it's actually needed, and start-up performance is great too. All things considered, the JVM and CLR are very similar runtimes, and there is no reason a more client-oriented JVM implementation wouldn't be possible.

[1] http://www.quora.com/How-does-garbage-collection-work-in-the...

[2] Eclipse, for instance, recommends configuring the JVM to allocate at least 500MB up-front. (This is done via the -Xms and -Xmx parameters.)


Sadly the JVM can't really do that (yet?). Even for normal Java applications, caching the JIT's output would really help ramp-up time in servers.


It doesn't really buy you much since it's slower to read from disk than for the JIT to generate code on the fly.


Re: FastCGI approach. You can use Nailgun for all JVM languages: http://www.martiansoftware.com/nailgun/

Note that this addresses JVM startup overhead, but still not Clojure startup overhead (the two are separate).


\ef in vimclojure uses nailgun, and it's not slow enough to notice.


It's an implementation problem of Java.

I never understood why the JVM folks never got around to developing a JIT cache. That would mean the first time I start a Java program it runs at its normal, slow pace, but from the second run on it uses the native cache and immediately runs with native performance. That would eliminate many performance problems of Java.

I know there is already a solution which uses a Java server process to serve the application to a client, which reduces the startup time, but this is not very convenient to use.

The slowness of Clojure is a typical problem of all languages which are based on the JVM. Racket Scheme, for instance, which is a Lisp-like language but NOT based on the JVM, needs just 0.062s to print "Hello World" (compiled) on my system.


Did you read the article? The majority of the Clojure startup time is spent on initializing the Clojure runtime.

"spends 95% of the startup-time loading the clojure.core namespace (the clojure.lang.RT class in particular) and filling out all the metadata/docstrings etc for the methods. This process stresses the GC quite a bit, some 130k objects are allocated and 90k free-d during multiple invokes of the GC (3-6 times), the building up of meta data is one big source of this massive object churn."


> The majority of the Clojure startup time is spent on initializing the Clojure runtime.

That's correct but even without this startup time Clojure is significantly slower than other functional languages. Look at SBCL and Racket in

http://shootout.alioth.debian.org/u32/which-programming-lang...

That doesn't mean that I don't like Clojure. I am even considering it for a business project. But Clojure is definitely unsuitable for small apps (shell scripts etc.)

Btw the benchmark listing doesn't take LuaJIT into account. This JIT is the fastest I have ever encountered, way ahead of JVM regarding startup time.


That "JIT cache" you're talking about already exists. It's known as AOT, or "Ahead of Time" compilation. I forget how you enable it, but it's there.


If you mean Clojure AOT, it precompiles Clojure to JVM bytecode. Usually the term JIT in this context describes the native code generated by the VM on the fly, not the on-the-fly bytecode generation performed by a higher-level language like Clojure.

EDIT: sorry, you probably referred to http://publib.boulder.ibm.com/infocenter/java7sdk/v7r0/topic...


According to this link, precompilation with AOT produces worse results than normal JIT execution. If so, what is AOT useful for?

"Because AOT code must persist across different program executions, the performance of AOT-generated code is not as good as that of JIT-generated code." (translated from the German)


It helps with startup time. It's not that it produces worse code; it's that there are multiple levels of compilation. It stores the equivalent of -O1 on disk, and eventually some of the code can ramp up to -O3+. This is almost entirely to help startup time, so that you don't have a bunch of code trying to compile during startup.


For dynamic languages such as Clojure or JRuby, a "JIT cache" would do no good.


Don't assume that an interactive language is necessarily interpreted. Clojure compiles to JVM bytecodes: http://clojure.org/compilation


Clojure startup time is slow. AOT helps a little. http://clojure.org/compilation

Clojure runs quite fast, in my experience. I have written a web app, http://rssminer.net, in Clojure (and some Java). On a small VPS (512M RAM, 1-core CPU), it can handle about 300 requests per second; on my desktop, about 2000 req/s. That is not slow, at least.

The persistent data structures Clojure uses are fast too. I did some tests a while ago; they're roughly the same speed as Java collections.


I remember seeing somewhere that someone looked into tackling the startup-time problem by stripping the Clojure core libs of unessential metadata like docstrings, etc.

I'm not sure if they went through with it, though.


Why would one do that, when it's slow and an image can't be saved? One does not need to have docstrings in the running Lisp. The typical solution to this problem is to have a file with the docstrings plus an index, and look up the docstring for a symbol from the file when needed.


Clojure's start-up is slow because the source files are compiled at load time. But once your program is launched, its performance is decent and very close to Java's. As for the immutable data structures, don't forget that each operation doesn't return a full copy of the data; it's cleverer than that, sharing structure between the old and new versions.

You can read a lot about this part of Clojure in the book "Practical Clojure". I'm reading it and learning quite a bit about Clojure.

To conclude, Clojure is fine for long-running programs.
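A quick sketch of what that sharing means in practice:

    ;; "Updating" a persistent vector copies only the path to the changed
    ;; node; the rest of the internal tree is shared with the original.
    (def v1 (vec (range 1000)))
    (def v2 (assoc v1 0 :changed))

    (nth v1 0) ;=> 0         (v1 is untouched)
    (nth v2 0) ;=> :changed  (and v2 did not copy all 1000 elements)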


As others have stated, the OP was comparing run time separately from startup time; Clojure was found to be only about 4 times slower than Java after startup costs are amortized away, which is quite believable given the benchmarks.

For most programs that aren't compute-intensive, you won't notice a penalty, but the same is true of Ruby or Python.


What I would like to see is a `defconstant` macro, which introduces a constant that cannot be changed anymore without restarting the JVM:

    (def x 1)          ; x = 1
    (def x 2)          ; x = 2 now

    (defconstant y 1)  ; y = 1
    (defconstant y 2)  ; Exception

Also, a way of fixing functions would be good, so that no var lookup is required. Calling such a fixed function would have no overhead; it would be a direct call.


There is the :const metadata:

    (def ^:const PI 3.14)
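As I understand ^:const (worth double-checking in a REPL), the compiler inlines the value at each use site, so code compiled afterwards won't see a re-def:

    (def ^:const PI 3.14)
    (defn area [r] (* PI r r))  ; 3.14 is inlined here at compile time

    (def PI 3.0)                ; re-defining the var...
    (area 1.0) ;=> 3.14         ; ...doesn't affect already-compiled callers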


Well, there is defonce [http://clojure.github.com/clojure/branch-master/clojure.core...]

As for function lookup, I recommend reading this thread: http://news.ycombinator.com/item?id=2928285 Also, vars are by default static (but can be made dynamic with ^:dynamic).


`defonce` unfortunately doesn't help. Without restarting the JVM I can overwrite that var. And that is okay, because defonce is just a protection mechanism, so as not to clobber data when a namespace is repeatedly reloaded. This reloading happens 99% of the time during development. Useful tool.

But I would like to have real constants. A final class with a static final field (and potentially type information), or something like that. This would give the optimal lookup time, as the JVM would have the direct address.

When I (defonce x 1) I can still (def x 2), without restarting the JVM. I want this to not be possible with a defconstant.
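A minimal sketch of the difference:

    (defonce x 1)
    (defonce x 2)  ; no effect: x already has a root binding, so x is still 1
    (def x 3)      ; but a plain def happily overwrites it; x is now 3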


Vars already default to static, so we just need to take better advantage of that in the compiler.


Yes, Clojure 1.4 defaults to static. However, a Var still needs to do an extra lookup at runtime: a Var does not point directly to the object that I want. And Vars have to be that way; this is their core feature. Clojure is a dynamic language, so they need to stay dynamic too.

All code constantly assumes that the objects behind Vars can change. Let's say we have `(defn foo [] 1)`. The caller of (foo) will first look up the address in RAM of the compiled function, and then jump to it. Because of this dynamism we can redefine foo at runtime: `(defn foo [] 2)`. All callers still function, and will now get the result `2`. In statically compiled languages this would not happen, because `foo` would be translated to the direct address of the first function; the concept of replacing functions at runtime doesn't exist there in that way.
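A quick REPL sketch of that indirection:

    (defn foo [] 1)
    (defn caller [] (foo))
    (caller) ;=> 1

    (defn foo [] 2)  ; rebind the var #'foo to a new function object
    (caller) ;=> 2   ; caller goes through the var, so it sees the change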

But I would like to see an optional "static" programming feature: I want to be able to mark functions as final. This would be nice after major development has happened. A function that Clojure compiled could then no longer be changed without restarting the JVM, but such functions could be called directly, without any overhead.


The JVM can already inline non-final virtual method calls, so getters and setters are free. Are you sure that being able to specify 'static' is a win? Alternatively, you can just use the compile-time capabilities of the macro system to inline your constants as locals:


    ;; A macro that evaluates its body at macroexpansion time, so the
    ;; result is inlined as a constant into the compiled caller:
    user=> (defmacro value [& body]
             (let [retval (eval `(do ~@body))]
               retval))
    #'user/value
    user=> (macroexpand '(value (+ 1 3)))
    4


I don't know if I should laugh or cry. Let's just say rock stars are not engineers.



