Love seeing the triple release. PyPy has come a long way on Python 3 since Py3k [1] and Fulcrum [2] and great to see a non-EOL Python 2 out in the wild.
Why do they hang on to Python 2.7? I suppose it doesn't have much maintenance cost for them.
I have to say that their official benchmarks present a performance comparison against Python 2.7, which at this point seems irrelevant; a comparison against Python 3 would be more useful.
PyPy is written in RPython, which is a subset of Python 2 without most of the dynamic features, so it can be compiled. Moving it to a subset of Python 3 would be a major effort, as they would have to rewrite both the PyPy code and the RPython compiler.
And RPython, being a subset of Python, can also be interpreted, so they probably want to keep the Python 2 interpreter so they can interpret it.
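To illustrate the kind of restriction involved (my own sketch, not from the PyPy sources): the RPython translator infers one static type per variable, so a function whose variables keep a single type translates fine, while type-changing reassignment, legal in full Python, is rejected.

```python
# Illustrative only: this is plain Python, but written in the
# statically-inferable style RPython requires.
def count_vowels(s):
    n = 0  # n is always an int
    for ch in s:
        if ch in "aeiou":
            n += 1
    return n

# By contrast, reassigning across types, fine in full Python, would be
# rejected by the RPython translator:
#     x = 1
#     x = "one"   # type of x changes -> translation error

print(count_vowels("pypy interpreter"))  # -> 4
```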
Given how limited RPython is, are you sure RPython isn't Python 2/3 compatible as is? Which Python 2-only features does it use?
The FAQ you've linked says that RPython will continue to run on Python 2 as long as PyPy exists; it says nothing about running RPython on a Python 3 interpreter.
To predict COVID treatment side effects, we built a whole-transcriptome trie with Scylla and PyPy, and it sped up a bajillion times over raw Python. Good times.
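For anyone unfamiliar, a minimal trie sketch (my example, not the commenter's actual code): it's exactly this kind of pointer-chasing, dict-heavy pure-Python code that PyPy's JIT tends to do well on.

```python
# Minimal dict-based trie: each node is a dict mapping one character to
# the next node; "$" marks the end of an inserted sequence.
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, seq):
        node = self.root
        for ch in seq:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-sequence marker

    def contains(self, seq):
        node = self.root
        for ch in seq:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
t.insert("ACGT")
print(t.contains("ACGT"), t.contains("ACG"))  # True False
```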
Ah, I'd assumed a typo; thanks! However, I can't tell if/how much my concerns hold:
> You don't need to replace the Python interpreter, run a separate compilation step, or even have a C/C++ compiler installed. Just apply one of the Numba decorators to your Python function, and Numba does the rest.
Yes, some code won't work -- numba has a nice compiler that will show you errors if it cannot infer the type of even a single variable at compile time (which usually happens the first time you call your function at run-time).
The argument that "some code won't get faster" is moot, since you typically only want to use `@njit`, which ensures you're in `nopython` mode.
I guess that's a double edged sword, in that when it says `nopython`, it really does mean no python.
This means you can only use features from the python interpreter that the numba team has re-implemented in LLVM IR.
---
IIRC `@njit` does involve overhead in `lowering` the types from Python to LLVM when the first njit function in the call graph is invoked, but not after that.
All this means that if you use for-loops in nopython mode, they reliably run faster, at least in my experience.
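A sketch of typical usage (assumes Numba is installed via `pip install numba`; the try/except fallback keeps the example runnable without it):

```python
# @njit is shorthand for @jit(nopython=True). The fallback below makes
# the function run as plain Python when Numba isn't available.
try:
    from numba import njit
except ImportError:
    def njit(func):  # no-op stand-in for environments without Numba
        return func

@njit
def dot(xs, ys):
    # Plain for-loop over floats: everything here must be something the
    # Numba team has re-implemented, or compilation fails with a typing
    # error on the first call (when compilation actually happens).
    total = 0.0
    for i in range(len(xs)):
        total += xs[i] * ys[i]
    return total

# First call triggers the compile/lowering cost; later calls are fast.
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```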
> Our main executable comes with a Just-in-Time compiler. It is really fast in running most benchmarks—including very large and complicated Python applications, not just 10-liners.
> There are two cases that you should be aware where PyPy will not be able to speed up your code:
> Short-running processes: if it doesn't run for at least a few seconds, then the JIT compiler won't have enough time to warm up.
> If all the time is spent in run-time libraries (i.e. in C functions), and not actually running Python code, the JIT compiler will not help.
It depends on the code. You don't get speed for free by using C extensions with CPython: you have to write code specifically for that extension. To get significant speed benefits from numpy, for example, you need specific knowledge of numpy, and the resulting code will look completely different from regular Python. PyPy, on the other hand, is essentially a free speed boost when you have standard Python code.
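To make the contrast concrete (my example, not from the thread): straightforward code like this is what PyPy's JIT accelerates unchanged, whereas getting comparable speed under CPython would mean rewriting it against numpy's array operations (something like `(arr * arr).sum()`).

```python
def sum_of_squares(values):
    # Idiomatic plain-Python loop: PyPy's JIT can compile this as-is,
    # with no library-specific rewrite.
    total = 0
    for v in values:
        total += v * v
    return total

print(sum_of_squares(range(1000)))  # -> 332833500
```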
What I have heard, though, is that string operations are more expensive in PyPy than in CPython.
Regardless, performance gains and regressions are unlikely to generalize; what's true for someone else's project may not be true for yours. If you're concerned about performance, I recommend benchmarking your own codebase.
This innocent (albeit convoluted) piece of code used to segfault on PyPy 7.3.1. It is now fixed in this release! It was a bug in their JIT.
    def main(n):
        # Note: exact value of n and prints are significant, don't change
        a = [0] * n
        for _ in range(n):
            pos = -1
            for i in range(n):
                if i > 0:
                    print(a[i - 1], end="")
                pos = i
            print(a[pos], _, n)

    main(191)
I really wish some Python company invested money into making a Python 3.8+ compatible PyPy, and into running more tests on libraries for compatibility fixes. We could all use faster code.
Because 30+ years of experience have taught me to only use platform languages for production code.
Clojure, Kotlin and Scala are guests with a temporary permit until the landlord acquires all the features that matter to the masses; then they will join BeanShell and friends.
Julia and Common Lisp are the landlords of their own stacks.
As for the features of the languages themselves, I have always considered myself a polyglot, using the platform language for each kind of deployment scenario.
Is there any chance for Pypy to work with Numba, especially with HPy coming? Yes, they're both JIT compilers, but I feel like the kind of optimizations they do are complementary rather than redundant. I want fast business logic with fast algorithmic code in the hot parts :)
That's a weird claim without specifying the context. Julia doesn't run Python code, so that's not it. It doesn't draw on Python's pool of programmers... etc.
Is it "for new code which is maths/vector heavy and not relying on any existing environment, Julia is a valid competitor to pypy"? Or is there a better qualifier?
Currently my greatest concern is the inability of Julia to provide stability even after they've reached 1.0. On paper 1.0 is an LTS release and still supported, but how many packages actually support it anymore? There seem to be no guidelines either; the official stance is that "we move on and drop LTS when the package developers move on."
>but how many packages actually support it anymore?
99%? You won't get the most recent versions, because a lot of packages started requiring v1.3 when the new AD and multithreading mechanisms came out, but you can still boot up the LTS and run any standard analysis that existed back when 1.0 was around, which is something like 3000+ packages. Where is your FUD coming from?
> I don’t see Julia a good replacement of Python. Julia has a long startup time. When you use a large package like Bio.jl, Julia may take 30 seconds to compile the code, longer than the actual running time of your scripts. You may not feel it is fast in practice. Actually in my benchmark, Julia is not really as fast as other languages, either. Probably my Julia implementations here will get most slaps. I have seen quite a few you-are-holding-the-phone-wrong type of responses from Julia supporters. Also importantly, the Julia developers do not value backward compatibility. There may be a python2-to-3 like transition in several years if they still hold their views by then. I wouldn’t take the risk.
[1]: https://morepypy.blogspot.com/2012/01/py3k-and-numpy-first-s...
[2]: https://morepypy.blogspot.com/2014/06/pypy3-231-fulcrum.html