Isn't it the case that Python has allowed type specifiers (type hints) since 3.5, although the CPython interpreter ignores them? The JIT might take advantage of them, which ought to improve performance significantly for some code.
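E.g. the annotations are kept around as metadata, but CPython never actually checks them (a quick sketch):

    def add(a: int, b: int) -> int:
        return a + b

    print(add("foo", "bar"))    # no error: prints "foobar"
    print(add.__annotations__)  # {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}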
What makes Python flexible is what makes it slow. Restricting that flexibility where possible offers opportunities to improve performance (and lets tools and humans spot errors more easily).
AFAIK good JITs like V8 can do runtime introspection and recompile on the fly if types change. Maybe using the type hints would help, but I don't think they're necessary for significant improvement.
Well, GraalPython is a Python JIT compiler that can exploit dynamically determined types, and it advertises a 4.3x speedup, so it's possible to do drastically better than a few percent. I think that's state of the art, but I might be wrong.
Note that this is with a relatively small investment as these things go: judging by the GitHub repo, the GraalPython team is roughly 3 people. It's an independent implementation, so most of the work went into being compatible with Python, including native extensions (the hard part).
But this speedup depends a lot on what you're doing. Some kinds of code can go much faster; others will be even slower than CPython, for example if you want to sandbox the native code extensions.
PyPy is a different JIT that gives anything from a slowdown (or parity) to a 100x speedup depending on the benchmark. They report a geometric mean of 4.8x across their benchmark suite.
https://speed.pypy.org/
To the contrary. In CL some flexibility was given up (compared to other Lisp dialects) in favor of enabling optimizing compilers; e.g. the standard symbols cannot be reassigned (which also preserves the sanity of human readers). CL also offers what some now call 'gradual typing', i.e. optional type declarations. And the remaining flexibility, e.g. around the OO support, limits how well the compiler can optimize the code.
Surely this is the job for a linter or code generator (or perhaps even a hypothetical ‘checked’ mode in the interpreter itself)? Ain’t nobody got time to add manual type checks to every single function.
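E.g. mypy already plays exactly this role as an external checker; a rough sketch (the exact output wording varies by mypy version):

    # example.py
    def double(x: int) -> int:
        return x * 2

    double("nope")  # runs fine under CPython, but:

    $ mypy example.py
    example.py:5: error: Argument 1 to "double" has incompatible type "str"; expected "int"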
Of course, this is not an example of good, high-performance code, only an answer to the specific question... the questioner certainly also knows about MyPy.
I actually don't know anything about MyPy, only that it exists. Does it handle that example correctly, that is, does it print "nopenope"? Because I think that's the correct behaviour: type hints should not actually affect evaluation (well, beyond the fact that they must be names that are visible in the scopes they're used in, obviously), although I could be wrong.
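(A reconstruction of what I assume the example upthread looked like, so purely a guess:)

    def f(x: int) -> int:
        return x * 2

    print(f("nope"))  # CPython happily prints "nopenope"; the hint is never checked

And for what it's worth, my understanding is that mypy never runs your code at all: it's a purely static checker, so CPython still does the printing; mypy would just reject the call to f.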
Besides, my point was that one of the reasons languages with (sound-ish) static types manage to have better performance is that they can omit all of those run-time type checks (and the supporting machinery), because they'd never fail. And if you have to put in those explicit checks, then the type hints are entirely redundant: e.g. Erlang's JIT ignores type specs; it instead looks at the type guards in the code to generate specialized code for the function bodies.
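To make the redundancy concrete in Python terms (a sketch, not any particular JIT's actual behaviour):

    def double(x: int) -> int:
        # The guard repeats exactly what the hint says. A JIT that trusts
        # guards (the way Erlang's trusts type guards) can specialize the
        # body from the check itself and never needs the annotation.
        if not isinstance(x, int):
            raise TypeError("expected int")
        return x * 2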
Of course dynamism limits performance (and, as said, the standard symbols and class restrictions are also partly an unhygienic-macro thing), but I meant that you can have both high performance and high dynamism in a programming language; dynamism itself is no excuse to not even try.
I doubt it with a copy-and-patch JIT, at least not the way they work now. I'm a serious user of mypy/Python static types, and as they stand they wouldn't let you do much optimization-wise.
- All integers are still big integers
- Use of the typing opt-out 'Any' is very common
- All functions/methods can still be overwritten at runtime
- Fields can still be added and removed from objects at runtime
The combination basically rules out native arithmetic, forces everything onto the heap, and requires multiple levels of indirection for looking up any variable/field/function: a CPU-performance nightmare (see the sketch below). You need a real optimizing JIT to track when integers stay in a narrow range and when things aren't getting redefined at runtime.
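All four points are easy to demonstrate in plain CPython; a minimal sketch (the names are made up):

    from typing import Any

    # 1. Integers are arbitrary precision; there's no machine int to lower to.
    n = 2 ** 100
    print(n.bit_length())    # 101

    # 2. Any opts out of checking entirely; mypy accepts this.
    def loophole(x: Any) -> Any:
        return x.whatever    # may blow up at runtime instead

    # 3. Methods can be rebound at runtime, invalidating any inlining.
    class C:
        def f(self) -> int:
            return 1

    C.f = lambda self: 2     # every existing instance now sees the new f
    print(C().f())           # 2

    # 4. Fields can be added and removed at runtime.
    obj = C()
    obj.extra = 42           # add a field...
    del obj.extra            # ...and remove it again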