
Isn't it the case that Python has allowed type specifiers (type hints) since 3.5, although the CPython interpreter ignores them? The JIT might take advantage of them, which ought to improve performance significantly for some code.
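
As far as I can tell, CPython just stores the hints in __annotations__ at runtime and never checks them; a minimal illustration:

    def add(a: int, b: int) -> int:
        return a + b

    print(add.__annotations__)  # {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}
    print(add("a", "b"))        # prints "ab" -- nothing is enforced at runtime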

What makes Python flexible is what makes it slow. Restricting that flexibility where possible offers opportunities to improve performance (and allows tools and humans to spot errors more easily).




AFAIK good JITs like V8 can do runtime introspection and recompile on the fly if types change. Maybe using the type hints will be helpful but I don't think they are necessary for significant improvement.


Doesn't Python already do this? https://www.youtube.com/watch?v=shQtrn1v7sQ


Are there any benchmarks that give an idea of how much this might improve Python's speed?


Well, GraalPython is a Python JIT compiler that can exploit dynamically determined types, and it advertises a 4.3x speedup, so it's possible to do drastically better than a few percent. I think that's the state of the art, but I might be wrong.

That's for this benchmark:

https://pyperformance.readthedocs.io/

Note that this is with a relatively small investment as these things go; the GraalPython team is roughly 3 people, I'd guess from looking at the GH repo. It's an independent implementation, so most of the work went into being compatible with Python, including native extensions (the hard part).

But this speedup depends a lot on what you're doing. Some types of code can go much faster. Others will be slower even than CPython, for example if you want to sandbox the native code extensions.


This is great info, thanks!


PyPy is a different JIT that gives anything from slower/same to a 100x speedup depending on the benchmark. They give a geometric mean of 4.8x speedup across their suite of benchmarks. https://speed.pypy.org/


Isn't CL a good counter-example to that "dynamism inherently stunts performance" mantra?


On the contrary. In CL, some flexibility was given up (compared to other Lisp dialects) in favor of enabling optimizing compilers, e.g. the standard symbols cannot be reassigned (which also preserves the sanity of human readers). CL also offers what some now call 'gradual typing', i.e. optional type declarations. And the remaining flexibility, e.g. around the OO support, limits how well the compiler can optimize the code.


But type declarations in Python are not required to be correct, are they? You are allowed to write

    def twice(x: int) -> int:
        return x + x

    print(twice("nope"))
and it should print "nopenope". Right?


The Python language server in Visual Studio Code will catch this if type checking is turned on, but by default, in CPython, that code will just work.


Yep. Therefore it's better to write:

    def twice(x: int) -> int:
        if not isinstance(x, int):
            raise TypeError("Expected x to be an int, got " + str(type(x)))
        return x + x


Surely this is the job for a linter or code generator (or perhaps even a hypothetical ‘checked’ mode in the interpreter itself)? Ain’t nobody got time to add manual type checks to every single function.


Of course not. That's what MyPy is for. This was only meant as an answer to exactly this question, for this one function.
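
For reference, roughly what that looks like on the snippet above (assuming it's saved as example.py; the exact wording varies by mypy version):

    $ mypy example.py
    example.py:4: error: Argument 1 to "twice" has incompatible type "str"; expected "int"  [arg-type]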


This can have substantial performance implications, not to mention DX considerations.


Of course, this is not an example of good, high-performance code, only an answer to the specific question... the questioner certainly also knows about MyPy.


I actually don't know anything about MyPy, only that it exists. Does it run that example correctly, that is, does it print "nopenope"? Because I think that's the correct behaviour: type hints should not actually affect evaluation (well, beyond the fact that they must be names that are visible in the scopes they're used in, obviously), although I could be wrong.

Besides, my point was that one of the reasons languages with (sound-ish) static types manage to have better performance is that they can omit all of those run-time type checks (and the supporting machinery), because they'd never fail. And if you have to put those explicit checks in anyway, then the type hints are actually entirely redundant: e.g. Erlang's JIT ignores type specs; it instead looks at the type guards in the code to generate specialized code for the function bodies.
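
To illustrate with a minimal sketch (this is mypy's narrowing behaviour, not anything CPython itself does): after the guard, a checker knows the type without any hint on x beyond object:

    def twice(x: object) -> int:
        if not isinstance(x, int):
            raise TypeError(f"expected int, got {type(x).__name__}")
        # past the guard, x is known to be an int -- the guard itself carries the type info
        return x + x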


Or use mypy.


Of course dynamism limits performance (and, as said, the restriction on standard symbols is also a macro-hygiene thing), but I meant that you can have both high performance and high dynamism in a programming language; dynamism itself is no excuse to not even try.


Standard symbols being reassigned also breaks macros.


Sort of! But also not really. If you want to get into this, I wrote a post about this: https://bernsteinbear.com/blog/typed-python/


I doubt it, at least with a copy-and-patch JIT, not the way they work now. I'm a serious mypy/python-static-types user, and as is, the type hints currently wouldn't allow you to do much optimization-wise.

- All integers are still big integers

- Use of the typing opt-out 'Any' is very common

- All functions/methods can still be overwritten at runtime

- Fields can still be added and removed from objects at runtime

The combination basically makes it mandatory to avoid native arithmetic, allocate everything on the heap, and use multiple levels of indirection for looking up any variable/field/function. A CPU perf nightmare. You need a real optimizing JIT to track when integers stay in a narrow range and things aren't getting redefined at runtime.
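
A toy sketch of the last two points (Point is just a hypothetical class):

    class Point:
        x: int
        y: int

    p = Point()
    p.z = 1.5                           # a field the annotations never mentioned
    Point.flip = lambda self: -self.z   # a method patched in at runtime
    n = 10 ** 100                       # ints stay unbounded no matter what you declare

A JIT that trusted the hints here would be wrong; it has to guard against all of this.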


You can't really rely on type annotations to help interpret the code.



