Because it's already fast enough for most of us? Anecdotal, but I've had my share of slow things in JavaScript that are not slow in Python. Try to generate a SHA256 checksum for a big file in the browser...
Have you tried to generate a SHA256 checksum for a file in the browser, no matter what crypto lib or API is available to you?
Have you tried to generate it using the Python standard library?
I did, and doing it in the browser was so bad that it was unusable. I suspect that it's not the crypto that's slow but the file reading. But anyway...
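For reference, the Python side is just a chunked read fed into hashlib. A minimal sketch (the function name and the 1 MB chunk size are mine):

    import hashlib

    def sha256_of_file(path, chunk_size=1024 * 1024):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # read in chunks so the whole file never has to sit in memory
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

(On Python 3.11+, hashlib.file_digest(f, "sha256") does the same thing in one call.)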
> SHA256 in pure Python would be unusably slow
No one would do that, because:
> Python's SHA256 is written in C
Which is why comparing "pure Python" to "pure JavaScript" is mostly irrelevant for most day-to-day tasks, like most benchmarks.
> Javascript is fast. Browsers are fast.
Well, no, they were not for my use case. Browsers are really slow at generating file checksums.
I thought that perhaps the difference could be due to the JavaScript version having to first read the entire file before getting started on hashing it, whereas the Python version does it incrementally (which the browser API doesn't support [0]). But changing the Python version to work like the JavaScript version doesn't make a big difference: 30 vs 35 ms (with a ~50 MB file) on my machine.
The slowest part in the JavaScript version seems to be reading the file, accounting for 70–80% of the runtime in both Firefox and Chromium.
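The "work like the JavaScript version" variant is essentially this (a sketch; the function name, the timing harness, and the file path are mine):

    import hashlib
    import time

    def sha256_whole_file(path):
        # mimic the browser flow: read the entire file first, then hash it
        with open(path, "rb") as f:
            data = f.read()
        return hashlib.sha256(data).hexdigest()

    start = time.perf_counter()
    sha256_whole_file("big_file.bin")  # hypothetical ~50 MB test file
    print(f"{(time.perf_counter() - start) * 1000:.0f} ms")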
Maybe 8 years is not much in a career? Maybe we had to support one of those browsers that did not support it? Maybe your snarky comment is out of place? And even to this day it's still significantly slower than the Python stdlib, according to the tester. So much for "why python not as fast as js, python is slow, blah blah blah".
The Python standard lib calls out to hand-optimized assembly language versions of the crypto algos. It is of no relevance to a JIT-vs-interpreted debate.
It absolutely is relevant to the "python is slow reee" nonsense though, which is the subject. Python-the-language being slow is not relevant for a lot of its users, because, even if they don't realize it, they use Python mostly as a convenient interface to huge piles of native code that does the actual work.
And, as noted upthread, that's a significant part of the uptake of Python in scientific fields, and why PyPy, despite the heroic work that's gone into it, is often a non-entity.
This is a major problem in scientific fields. Currently there are sort of "two tiers" of scientific programmers: the ones who write the fast binary libraries, and the ones who use them from Python (until they hit something that needs an explicit loop, at which point they're SOL).
This is known as the two-language problem. It arises from Python being slow to run and compiled languages being painful to write. Julia tries to solve it (but fails due to implementation details). Numba et al. try to hack around it.
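For a concrete picture of the "hack around it" approach: Numba JIT-compiles a plain Python loop to machine code, as long as you stay inside its supported subset. A rough sketch (my own example, not from the thread):

    import numpy as np
    from numba import njit

    @njit  # compiled to machine code on first call; plain CPython would crawl here
    def pairwise_min_dist(points):
        n = points.shape[0]
        best = np.inf
        for i in range(n):
            for j in range(i + 1, n):
                d = 0.0
                for k in range(points.shape[1]):
                    diff = points[i, k] - points[j, k]
                    d += diff * diff
                if d < best:
                    best = d
        return best ** 0.5

    print(pairwise_min_dist(np.random.rand(2000, 3)))

Step outside that supported subset, though, and you're back to the two-tier situation.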
PyPy is sadly vaporware. The failure from the beginning was not supporting the most popular (scientific) Python libraries. It nowadays kind of does, but it's brittle and often hard to set up. And anyway, PyPy is not very fast compared to e.g. V8 or SpiderMonkey.
The major problem in scientific fields is not this, but the amount of incompetence and the race-to-the-bottom environment that enables it. Grant organizations don't demand rigor and efficiency; they demand shiny papers. And that's what we get, with god-awful code and very questionable scientific value.
There are such issues, but I don't think they are a very direct cause of the two-language problem.
And even these issues are part of the greater problem of late-stage capitalism, which in general produces god-awful stuff of questionable value. E.g. the vast majority of industry code is like this.
FYI: the author of that post is a current Julia user and intended the post as a counterpoint to their normally enthusiastic endorsements. So while it is a good intro to some of the shortfalls of the language, I'm not sure the author would agree that Julia has "failed" due to these details.
Yes, but it's a good list of the major problems, and laudable for a self-professed "stan" to be upfront about them.
It's my assessment that the problems listed there are a reason why Julia will not take off and we're largely stuck with Python for the foreseeable future.
It is worth noting that the first of the reasons presented is significantly improved in Julia 1.9 and 1.10 (released ~8 months and ~1 month ago). The time for `using BioSequences, FASTX` on 1.10 is down to 0.14 seconds on my computer (from 0.62 seconds on 1.8 when the blog post was published).
There is pleeeenty of mission-critical stuff written in Python for which interpreter speed is a primary concern. This has been true for decades. Maybe not in your industry, but there are other Python users.
The point of Python is quickly integrating a very wide range of fast libraries written in other languages, though; you can't ignore that performance just because it's not written in Python.
Good to see progress anyways.