I don't know the exact current status or the roadmap (if there is one), but the idea is to have both eventually. Packages and files will be cached automatically so that they load much more quickly, and it will eventually be possible to output standalone executables.
I haven't used numba or benchmarked it, so I'm not really qualified to comment. I do, however, vastly prefer the MATLAB-like syntax of Julia to that of Python.
Is this going to hamper it from becoming the next Python for both general and data science programming? Do you think I should invest in learning Julia, or continue with Python?
The CMU course is very traditional. It covers basic exploratory data analysis (summary statistics, plotting data), basic probability, and hypothesis testing and estimation. There's no programming, nothing Bayesian, and only brief discussion of regression.
(I taught 36-201, the intro stats course that was used to build the OLI course, this summer.)
Statistical Inference, on the other hand, seems to take a Bayesian perspective and is very much not your standard intro stats class. It looks interesting and I'll have to skim through some of it.
Well, that was the point of my original message. I'm using NumbaPro and easily getting C speed with R-like vectorized convenience right now, which is why I consider the "speed" argument a non-argument when compared with Python. And I haven't even started using the CUDA approach...
You allude to another point, though: Python is not standing still. Python and its environment are a mighty high mountain for Julia to climb if Julia isn't going to move the game forward significantly enough to compensate for its big ecosystem disadvantage. Julia cannot get by on incremental improvement - it doesn't have enough momentum to make that a winning strategy. It needs to leapfrog Python.
I should add one more point though. The post mentions Matlab 15 times and Python only 7. It's possible this whole Julia effort will be successful with the Matlab crowd which, up to now, has been watching with horrified fascination from the sidelines as open source ate its lunch.
Blaze has a somewhat broader focus than what I was talking about, since Blaze mostly offloads the actual computation to a particular backend. But a combination of Blaze and a custom lowering of the computation into machine code using numba would be similar (although without the type safety to guarantee that certain optimizations are possible).
I've used that library, and I really don't like it at all. They tried to bring the R syntax to Python, which ends up looking awful and misses the point of the Grammar of Graphics. In the same way that every language has its own way of expressing control flow, every language should have its own way of expressing the Grammar of Graphics. We don't need R's ggplot2 in Python; we need a pythonic way to express the Grammar of Graphics.
If I had stronger python-fu I would love to build "GGPy".
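To make the wish concrete, here's a toy sketch of what a "GGPy" core might look like: a plot as a declarative spec (data + aesthetic mappings + a stack of layers) composed with `+`, as in ggplot2. Every name here (`PlotSpec`, `Layer`, `geom_point`, `geom_line`) is invented for illustration; no real library is implied, and a real one would of course also render the spec:

```python
class Layer:
    """One geometric layer in the spec, e.g. points or lines."""
    def __init__(self, geom, **params):
        self.geom = geom        # geometry name, e.g. 'point'
        self.params = params    # layer options, e.g. size=2

def geom_point(**params):
    return Layer('point', **params)

def geom_line(**params):
    return Layer('line', **params)

class PlotSpec:
    """A dataset plus aesthetic mappings plus a stack of layers."""
    def __init__(self, data, **aes):
        self.data = data        # e.g. a pandas DataFrame
        self.aes = aes          # mappings like x='wt', y='mpg'
        self.layers = []

    def __add__(self, layer):
        # ggplot2-style composition: spec + geom_point() + geom_line()
        self.layers.append(layer)
        return self

# Usage: build the declarative spec; rendering would happen elsewhere.
spec = PlotSpec(data=None, x='wt', y='mpg') + geom_point(size=2) + geom_line()
```

The point is that `+` composition and keyword-argument aesthetics already feel pythonic, without any attempt to mimic R's non-standard evaluation.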
Maybe. The plots look a lot like ggplot2 plots, and the syntax looks like Python, but I haven't dug into it to see whether it builds plots using the Grammar of Graphics under the hood.
We've shifted to plot.ly for visualization and moved away from expensive BI tools. We set up python/pandas scripts on cron to push real-time data to our local plot.ly web server, which makes a local copy of the data and updates the chart. You simply embed the chart as an iframe wherever you want internally, and BOOM, you've got a real-time chart (and a beautiful one, I might add).
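The refresh step in that pipeline might look roughly like this. Everything below is a placeholder sketch, not our actual setup: the crontab schedule, the column names, the CSV path, and the `refresh_data` function are all made up, and the actual chart update would go through the plot.ly client against the local server rather than a CSV:

```python
# Sketch of a cron-driven refresh script. In crontab, something like:
#     */5 * * * * python refresh_chart.py
import pandas as pd

def refresh_data(path='chart_snapshot.csv'):
    # In practice this would query the source database; here we
    # fabricate a few rows so the script is self-contained.
    df = pd.DataFrame({
        'timestamp': pd.date_range('2014-01-01', periods=3, freq='h'),
        'value': [10, 12, 11],
    })
    # Write the local snapshot the chart is rebuilt from; the real
    # script would push df to the plot.ly server here instead.
    df.to_csv(path, index=False)
    return df

if __name__ == '__main__':
    refresh_data()
```

Cron just reruns the script on a schedule; the embedded iframe picks up the updated chart with no further work.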