
You need to build in some hyperbole. Python is a scripting language, and even amongst scripting languages, not a particularly fast one. So ...

Python, as a weapon, is like a government grant on which you have written "shoot a gun". You just tell it what to do and it somehow happens ...

using 500x the resources any sane human being would have used. If you examine what it did to launch the bullet, you find it built and operated an international airport, including 5 passenger terminals, and then used a very elaborate-looking runway to fire the bullet. The amount of empty space between the buildings looks very impressive and significant.




The only reason Python should ever be problematically slow is if you're writing bad Python. (Well, almost ever.)

What you have to understand is that pythonic Python is largely a matter of invoking C libraries, and that Python only really runs at "interpreted" speed the first time you invoke the code.

It's no slower than a shell script that serves to invoke grep and sed would be, because you're relying on the underlying optimized tools to execute the bits that would actually benefit from optimization. The microseconds it takes to switch from one to the other are almost never going to be a meaningful slice of the total execution time.
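A rough sketch of that division of labor, with NumPy standing in for "the underlying optimized tools" (exact timings will obviously vary by machine):

    # Minimal sketch: the same reduction done by the interpreter vs. delegated
    # to NumPy's compiled code. The Python layer is thin glue either way.
    import timeit

    import numpy as np

    xs = list(range(1_000_000))
    arr = np.arange(1_000_000)

    pure_python = timeit.timeit(lambda: sum(x * x for x in xs), number=10)
    delegated = timeit.timeit(lambda: np.dot(arr, arr), number=10)

    print(f"pure-Python loop: {pure_python:.3f}s")
    print(f"NumPy (C under the hood): {delegated:.3f}s")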

Instead of picking over the mostly insignificant delays that are left, Python has said "hey, let's optimize for readability & developer sanity instead — things that actually affect project outcomes." And then you go knock off 10 different issues you wouldn't have had time for, and you don't stab yourself in the eye with a fork while trying to make sense of the old code.


> What you have to understand is that pythonic Python is largely a matter of invoking C libraries, and that Python only really runs at "interpreted" speed the first time you invoke the code.

I do a bit of machine learning for work every now and then, and I don't agree. Python is both really popular for machine learning and way too slow to run production workloads in.


I routinely do machine learning and computer vision, and have used a number of languages for them.

The main issue, irrespective of language, is that the implementation of your final model is often a distinct step from the last output of your experimental tools. If you just take what it gives you and try to deploy that, it will often provide extremely suboptimal performance.

The methods for implementing your final model could involve raw Python, NumPy, the Python wrapper for an external library, or writing and consuming custom C libs... it depends on the complexity of the hyperplanes.
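To make the last option concrete, a hypothetical sketch of consuming a custom C lib via ctypes — the library name and predict() signature are made up here; assume something like `gcc -shared -fPIC -o libmodel.so model.c` exposing `double predict(const double *features, size_t n)`:

    # Hypothetical: call a hand-written C scoring routine from Python via ctypes.
    import ctypes

    lib = ctypes.CDLL("./libmodel.so")  # made-up library name
    lib.predict.restype = ctypes.c_double
    lib.predict.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]

    features = [0.3, 1.7, -0.2]
    buf = (ctypes.c_double * len(features))(*features)
    print(lib.predict(buf, len(features)))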

But e.g. scikit-learn already wraps libsvm and liblinear. If your SVM (etc.) is slow, it's very unlikely to be because you used Python.
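For illustration, both of those back ends are reached through a couple of lines of glue (standard scikit-learn API; the dataset here is synthetic):

    # SVC trains via libsvm, LinearSVC via liblinear; Python is just the wrapper.
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC, LinearSVC

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    rbf = SVC(kernel="rbf").fit(X, y)         # libsvm under the hood
    lin = LinearSVC(max_iter=5000).fit(X, y)  # liblinear under the hood

    print(rbf.score(X, y), lin.score(X, y))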

If you're e.g. trying to do Facebook-level heavy lifting, your experience may vary. But again, that would be a challenge for any tool. The solution is to use sampling, parallelism, etc. - and to implement and optimize your final model as a separate step from designing it.
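One way the "sampling, parallelism" part tends to look in practice — a sketch assuming joblib and scikit-learn, with illustrative names and numbers:

    # Experiment on a subsample; evaluate candidate settings in parallel.
    import numpy as np
    from joblib import Parallel, delayed
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    # Fit on a manageable random subset while designing the model.
    idx = rng.choice(len(X), size=10_000, replace=False)
    X_small, y_small = X[idx], y[idx]

    def fit_one(C):
        model = LogisticRegression(C=C, max_iter=1000).fit(X_small, y_small)
        return C, model.score(X_small, y_small)

    # Try several regularization strengths in parallel.
    results = Parallel(n_jobs=4)(delayed(fit_one)(C) for C in (0.01, 0.1, 1.0, 10.0))
    print(results)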



