The different uses of Python type hints (lukeplant.me.uk)
78 points by BerislavLopac on April 15, 2023 | 69 comments



I inherited a Python-based build system which had been written without any unit tests because... it's a build system, right? Right?? Anyhow, it was the kind of thing where you'd build for 20 minutes (Android) and then hit some trivial error in the Python code at the end of the build. You fix that, run again, and waste another 20 minutes on the next thing.

I couldn't do unit tests all at once because of the design not having any way of accommodating them. I needed something to stop the enormous waste of time.

So I added type hints. The IDE would show me when illogical things were being done with parameters to methods/functions. It was fairly quick compared to a total refactoring, but not effort-free. I barely noticed any effect; it didn't catch a single error. Eventually I created a kind of dummy version of Android that "built" in a few seconds, and I tested against that first. That allowed me to speed up changes and get enough refactoring done to add a few critical unit tests, and the whole thing started to come under control.

This anecdote has almost no meaning - you cannot conclude that type hints have no benefit from one case - I just think that tests are almost always more important, and that hinting and the whole rigmarole of strong typing are much less of a panacea than tests are.


Again, purely anecdotally, I've done similar things in both Typescript and Python, and I usually catch at least one silly type error in Typescript, but I rarely do in Python. That could be because the Python developers I was following were just so good, but I suspect it's more about the quality of the typecheckers. Typescript feels much better at finding errors in normal, idiomatic Javascript, whereas I feel that with Mypy, if I want the best results, I need to write code in a way that plays to its strengths.

I've been told that Pyright is better but I've not tried it out properly. But yeah, your experience largely matches with mine, for Python at least.


IMO type hints are mostly about (significantly!) improving readability.


Also about improving editor experience and having better auto-completion that's based on type hints.


PyCharm and VSCode can do a lot of reasonably smart refactorings when the code isn't just a pile of `typing.Any`, and that's a huge quality-of-life improvement for any non-trivial code base.


The issue that I have with Python type hints is that they don't go nearly far enough in describing the data being manipulated. Specifically, I'm thinking of stuff like the dimensionality and cardinality of Numpy arrays or Pandas frames. Usually that's the stuff where I have the most questions when I look at Python code, and the type system as it's being used now offers no help there.


You very quickly get undecidable type checking with such a powerful type system. Then you need a way to handle that; usually the solution is to help the type checker along by providing a proof that your code inhabits the claimed type. Then you need a way to have the proof live together with the code, a proof language, and preferably a whole library of proofs people can build on.

If this sounds fun then you can go play with e.g. Idris or F*.


C++, Java and Rust seem to do well with undecidable type checking.


I'm not sure how a python annotation/type system could possibly do that? If numpy/pandas had different types for different cardinalities it would work today.

You just need those libraries to embrace it, really; then you could theoretically have type constructors that provide well-typed NxM matrix types or whatever, allowing you to enforce that [[1,2],[3,4]] is an instance of matrix_t(2, 2).
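
A rough sketch of what "embracing it" could look like with today's generics (Matrix and matmul are hypothetical, not numpy's API; only the checker verifies the shapes, nothing is enforced at runtime):

    from typing import Generic, Literal, TypeVar

    M = TypeVar("M", bound=int)
    N = TypeVar("N", bound=int)
    K = TypeVar("K", bound=int)

    class Matrix(Generic[M, N]):
        """A matrix whose row/column counts live in the type."""

    def matmul(a: Matrix[M, N], b: Matrix[N, K]) -> Matrix[M, K]:
        raise NotImplementedError  # shape checking happens statically

    a: Matrix[Literal[2], Literal[3]] = Matrix()
    b: Matrix[Literal[3], Literal[4]] = Matrix()
    result = matmul(a, b)  # inferred as Matrix[Literal[2], Literal[4]]
    # matmul(b, a) would be rejected: the inner dimensions don't line up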

I don't see how python could possibly make such inferences for arbitrary libraries.


PEP 646 (variadic generics, https://peps.python.org/pep-0646/) was made for this specific use case, but mypy is still working on implementing it. And even with it, several more PEPs are expected to be needed to make operations on variadic types powerful enough to handle common array operations. numpy/tensorflow/etc. do broadcasting a lot, and that would probably need a type-level Broadcast operator just to encode. I also expect the type definitions for numpy will get fairly complex, similar to template-heavy C++ code, after they add shape types.
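
The flavor of PEP 646, condensed from the PEP's own examples (Python 3.11+ typing; the Array class is illustrative, not a real library):

    from typing import Generic, NewType, TypeVarTuple, Unpack

    Shape = TypeVarTuple("Shape")
    Batch = NewType("Batch", int)

    class Array(Generic[Unpack[Shape]]):
        """An array whose per-dimension types are tracked statically."""

    def add_batch(x: Array[Unpack[Shape]]) -> Array[Batch, Unpack[Shape]]:
        raise NotImplementedError  # prepends a Batch dimension in the type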


I like typing that with strings like '(batch,r,a,s,channel,t)'. The tooling doesn't do anything special with it, but it makes the code understandable at a glance. Adopting libraries like einops and core routines like einsum in lieu of equivalent alternatives encourages the propagation of names (rather than rolling axes or whatever) anywhere it matters. Having a coding convention about the standard order of axes helps a bit as people get more familiar with that aspect of the codebase too, only deviating where necessary.


I like this comment because so far I don't know what you're talking about. That's the hallmark of something in this domain worth looking up but I figured you might be willing to share more on the matter. :)


The premise is that Python's type hints don't provide enough information about things like numpy arrays. Fixing that correctly is hard because things that matter in that sort of code include facts which could hypothetically be encoded in a type system:

1. What's the datatype of the array elements

2. Does this array alias other memory

3. Is the access pattern I want to do contiguous in memory

4. If you track the provenance of an array, does it include something like a "width" dimension and a "height" dimension

5. How many dimensions are there

6. What's a good semantic description (type) for each dimension

7. As an exact integer (or modulo some power of 2 or whatever), how big is each dimension

And on and on and on. An honest-to-goodness type hint capturing that sort of crap in a way that's statically analyzable is a nightmare, and it wouldn't be totally trivial to even write the code to make a type hint like that reasonable to read and write. Even if you could, it'd probably generate a lot of noise that for any particular use of an array would distract you from the aspects you care about.

A nice hybrid solution, IMO, comes from the fact that Python allows arbitrary objects to be used as type hints: a string description of the aspects you're using/providing on a particular array works as decent documentation for other developers. A few examples:

1. The array should describe a typical 24-bit 3-channel image. You might use a type hint like 'u8:(w,h,3)' to indicate that it's a 0-255 integer field rather than a 0-1 float field, which dimensions have width/height/channels, and that it's a 3-channel image. It'd probably be good to also label those channels with a convention like 'u8:(w,h,(rgb))', like 'u8:(w,h,3):rgb', with hungarian typing, or something (no particular recommendations on my end since I'm not usually working with heterogeneous data like that, but choosing the wrong encoding or even the wrong coordinate space for RGB or whatever is a big deal, so you'd probably want to represent that somehow).

2. You have a function signature with multiple inputs, and the computation is mostly arbitrary, but it's important that some dimensions align. Then label them the same. Something like matmul(left: '(n, d)', right: '(d, k)') -> '(n, k)' (see the runnable sketch after this list).

3. You're doing some ML thing on some time-series medical data, and it's common to have giant dense tensors floating around. Label semantically what all the dimensions are with a type like '(batch, r, a, s, channel, t)', or use longer names as appropriate depending on your audience and the background knowledge you can assume.
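
For what it's worth, strings like these are perfectly legal annotations at runtime (static checkers will complain, as noted below); a minimal runnable version of the matmul example from item 2:

    import numpy as np

    def matmul(left: '(n, d)', right: '(d, k)') -> '(n, k)':
        return left @ right

    print(matmul(np.ones((3, 2)), np.ones((2, 4))).shape)  # (3, 4)
    print(matmul.__annotations__['left'])                  # '(n, d)'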

Libraries like einops and functions like `np.einsum` take that a step further and require stringified descriptions of the operation you're trying to do. They can have a learning curve, but the crux of the idea is that instead of writing garbage like `arr[3,6,-4:,np.newaxis,...].T.reshape(4*n, -1)` or, God forbid, some sort of roll/transpose logic, you have a higher-level description.

A couple examples with einsum:

1. The dot product of v and w is `np.dot(v, w)` or `np.sum(v * w)`, and it's also `np.einsum('d,d', v, w)`. Arguably einsum is a bit of syntactic noise for such a simple example, but if v and w have different shapes than you think, the simpler solutions will silently produce garbage (e.g., the first option will do matrix multiplication sometimes, and the second is arguably closer to correct most of the time, but if you think you're operating on 1D objects and actually do want a channeled operation like matrix multiplication when the input isn't 1D, then the sum of products is wrong and not captured in the type system), whereas einsum will just barf if the stated dimensions don't match your expectations. Moreover, with optimize=True it'll actually fall back to whichever of the simpler solutions is fastest.

2. Imagine you have a matrix A of shape (n, n) and a matrix X of shape (n, d) and want to compute something like A @ v @ A.T for each column v of X. You can write it via standard numpy operators, but it looks like garbage and kind of hides what's actually happening. The einsum solution is just `np.einsum('vw,wd,nv->nd', A, X, A)`. You're contracting over `v` and `w` and left with `n` and `d`. It's not perfect since you just get single-letter names to work with, but it's a hell of a lot better than equivalent options, and much easier to make suitably fast (just pass optimize=True).
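
Both examples as runnable code (a sketch; shapes chosen arbitrarily):

    import numpy as np

    # Example 1: the dot product three ways.
    v, w = np.random.rand(5), np.random.rand(5)
    assert np.allclose(np.dot(v, w), np.sum(v * w))
    assert np.allclose(np.dot(v, w), np.einsum('d,d', v, w))

    # Example 2: A @ v @ A.T for each column v of X, without any loops.
    n, d = 4, 3
    A, X = np.random.rand(n, n), np.random.rand(n, d)
    out = np.einsum('vw,wd,nv->nd', A, X, A, optimize=True)
    assert np.allclose(out, A @ A @ X)  # the same contraction, spelled densely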

And then einops is even better because roll/transpose logic is incredibly fiddly and prone to off-by-one errors in your choice of dimension or needing to deeply understand how the function works to not make footgun-style mistakes. An API like `swapaxes(arr, 'batch', 'time')` is 10x easier to use than `swapaxes(arr, 0, 5)` -- like, imagine somebody adding an extra dimension in a world where positions are absolutely referenced and where if you get it wrong the program will still run and produce interesting-looking garbage because the definitions of `np.dot` and everything else in the library depend on the shape of the inputs.
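
For instance, with einops' rearrange (the axis names are whatever you declare; a minimal sketch):

    import numpy as np
    from einops import rearrange

    x = np.zeros((32, 100, 3))  # (batch, time, channel)
    y = rearrange(x, 'batch time channel -> time batch channel')
    assert y.shape == (100, 32, 3)  # a named, order-independent transpose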


Why would you do this instead of just a comment? I feel like type hints only have value if they can be used by the tooling.


The comment has to go somewhere, and IME it makes the common path easy and less common paths not too hard. In particular, to use those functions correctly and understand what they're doing, you often really do just need their name and such an augmented type signature. Having all that information in one place rather than having to extract it out of a docstring (a docstring which might not be shown by default in your editor without additional keystrokes or mouse movements and scrolling), and being able to immediately glance to pieces that don't stick around in short-term memory is nice.

A comment would be fine too, especially if it's right next to the type signature, but to do that you'd need to add extra newlines, and the comment would be in roughly the same spot as the type hint, so I don't know that you gain much. Mypy doesn't really like strings used that way, but mypy isn't a great tool anyway, so c'est la vie?

If somebody just wanted to throw that in a docstring I wouldn't complain though. It's definitely more important that the information exist than that it be in a particular place.


You want dependent types!


Python can actually do (some) dependent types with generics but it's not pretty.

The only use case I've found that is both possible and worthwhile is being able to say a value is a T if there's a default and an Optional[T] otherwise.
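
Here is one way that pattern can be encoded (a sketch; get_config and CONFIG are hypothetical, and the trick is the same one dict.get's stubs use):

    from typing import Optional, TypeVar, Union, overload

    T = TypeVar("T")
    _MISSING = object()
    CONFIG = {"debug": "1"}  # hypothetical settings store

    @overload
    def get_config(key: str) -> Optional[str]: ...
    @overload
    def get_config(key: str, default: T) -> Union[str, T]: ...

    def get_config(key, default=_MISSING):
        value = CONFIG.get(key)
        if value is not None:
            return value
        return None if default is _MISSING else default

    # reveal_type(get_config("debug"))     -> str | None
    # reveal_type(get_config("debug", 0))  -> str | int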


> Python can actually do (some) dependent types with generics

Could you show some examples?


Create a class that enforces this and use the new class as a type?


The trouble is that's not how any of the ML or data science Python code is written at the moment. Such practices could help although I think more elegant solutions should be explored.


A few other examples for the sections given:

Runtime behaviour determination: the stdlib [dataclasses](https://docs.python.org/3/library/dataclasses.html#module-da...)

Dataclasses is notable because it's the only example (I'm aware of) of type hints affecting runtime behavior as part of the stdlib.

Compiler instructions: mypyc was (one of?) the first to do this, but Cython actually supports this natively now and was much more active than mypyc, last I checked.


The Annotated type is worth mentioning as well. Today in something like SQLModel you do (from the readme):

    class Hero(SQLModel, table=True):
        id: Optional[int] = Field(default=None, primary_key=True)
And that's fine. I wouldn't necessarily change anything here. Annotated gives you the option of approaching things in a different way, though.

    class Hero(SQLModel, table=True):
        id: PrimaryKey[Optional[int]] = None
I have some use cases where the alternative approach is useful, like quantification of class fields or function arguments.
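
To sketch how such a PrimaryKey alias could be built in plain Python (PrimaryKeyMarker is hypothetical, and this Hero is not a SQLModel class; PEP 593 lets you parameterize an Annotated alias with a TypeVar):

    from typing import Annotated, Optional, TypeVar, get_type_hints

    T = TypeVar("T")

    class PrimaryKeyMarker:
        """Hypothetical marker an ORM could look for via get_type_hints()."""

    PrimaryKey = Annotated[T, PrimaryKeyMarker()]

    class Hero:
        id: PrimaryKey[Optional[int]] = None

    print(get_type_hints(Hero, include_extras=True)["id"])
    # typing.Annotated[typing.Optional[int], <...PrimaryKeyMarker object...>]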


FastAPI very recently added support for Annotated, and now recommends using it over default arguments.

https://fastapi.tiangolo.com/release-notes/#0950
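
The Annotated style looks roughly like this (adapted from the FastAPI docs):

    from typing import Annotated, Optional
    from fastapi import FastAPI, Query

    app = FastAPI()

    @app.get("/items/")
    def read_items(q: Annotated[Optional[str], Query(max_length=50)] = None):
        # Before 0.95: q: Optional[str] = Query(default=None, max_length=50)
        return {"q": q}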


> Dataclasses is notable because it's the only example (I'm aware of) of type hints affecting runtime behavior as part of the stdlib.

FWIW, `typing.NamedTuple` did this in Python 3.5, three years before dataclasses was introduced in 3.7.

    class Foo(typing.NamedTuple):
        a: int
        b: str

    f = Foo(a=1, b="hello")
    print(f.b)  # "hello"


Variable annotations were only added in Python 3.6. Defining a typed namedtuple in Python 3.5 looked like this:

    Foo = typing.NamedTuple('Foo', [('a', int), ('b', str)])


Great catch, how could I forget!


I managed to find type hints affecting runtime behavior in functools.singledispatch's register(). https://docs.python.org/3/library/functools.html?highlight=f...

> To add overloaded implementations to the function, use the register() attribute of the generic function, which can be used as a decorator. For functions annotated with types, the decorator will infer the type of the first argument automatically:

That appears to be the only other case.
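
For reference, that looks like this (describe is just an illustrative name):

    from functools import singledispatch

    @singledispatch
    def describe(value):
        return f"something else: {value!r}"

    @describe.register  # dispatch type inferred from the annotation below
    def _(value: int):
        return f"an int: {value}"

    print(describe(3))    # an int: 3
    print(describe("hi")) # something else: 'hi'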


Dataclasses, like typing.NamedTuple, do not care about type hints:

    >>> @dataclasses.dataclass
    ... class D:
    ...   x: int
    ...
    >>> D('a')
    D(x='a')


Dataclasses do care about type hints in some cases, for example when determining what counts as a field.

    @dataclass
    class A:
        a: int = 0
        b = 1

    >>> A(a=1, b=2)
    TypeError: __init__() got an unexpected keyword argument 'b'


The idea with type hints in Python though is that they’re meant to be checked using some static analysis tool like mypy/pyright/etc. The runtime behavior for the most part remains unchanged in the sense that the Python interpreter won’t enforce the types in cases such as the one you’ve provided.



Protocols with @runtime_checkable fit your description!
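
A minimal sketch:

    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class Closable(Protocol):
        def close(self) -> None: ...

    class Resource:
        def close(self) -> None:
            print("closed")

    print(isinstance(Resource(), Closable))  # True, checked structurally at runtime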


I'm a heavy user of type hints and enable pyright and mypy's strict modes whenever possible. However, you can't always be strict: if you use almost any package in the data science/ML ecosystem, you're unlikely to get good type inference and checking[1]. In those cases, it can still be useful to type some parameters and return values to benefit from _some_ checking, even if you don't have 100% coverage.

Type hints also bring improved completion, which is nice too.

[1] For example, huggingface's transformers library decided to drop support for full type checking because it was unsustainable but decided to keep the types for documentation[2]. There are stubs for pandas, but they're not enough because pandas has a tendency to change return types based on the input, and that breaks quickly.

[2] https://github.com/huggingface/transformers/pull/18485


> There are stubs for pandas, but they're not enough because pandas has a tendency to change return types based on the input, and that breaks quickly.

A mechanism like Haskell's type application seems like it could solve at least most, maybe all of those problems.


I can confirm from experience using Frames[0] and type applications together that this is very much the case.

0: https://hackage.haskell.org/package/Frames


This sort of thing is why I gave up Python. I could see having strong typing. Or optional strong typing. But unchecked type hints are just silly.

The way everybody else seems to be going is strong typing at function interfaces, with automatic inference of as much else as can be done easily. C++ (since "auto"), Go, Rust, etc.


> The way everybody else seems to be going is strong typing at function interfaces, with automatic inference of as much else as can be done easily

Both mypy and pyright will do that. If your function return type is annotated, they will infer the type of the receiving variable. If you have two branches where a variable can receive two types, pyright will infer the union type. Similar for None.

Example:

    a = input()
    if a.isdigit():
        x = int(a)
    else:
        x = a
    reveal_type(a)
    reveal_type(x)
Pyright output, stripped of configuration noise:

    typetest.py:6:13 - information: Type of "a" is "str"
    typetest.py:6:13 - information: Type of "x" is "int | str"
Mypy doesn't allow this. It infers `x` as `int` and rejects the second assignment.

The only times I need to annotate local variables are (1) the function isn't typed, so it gets inferred as Any (2) I'm initialising an empty collection, so its type might get inferred as e.g. `list[Unknown]` (pyright; mypy can infer the element type).

Is there something inference-wise that you miss in Python compared to C++ or Go?

PS: The larger problem to me is the inconsistency between pyright and mypy, the leading type checkers. Sometimes issues are raised between them and they work to achieve agreement, but I believe the two issues highlighted above (unions and collections) are design choices, unlikely to change.


> Both mypy and pyright will do that.

This still makes me seethe. We have pip, poetry, conda, and more. The Python folks knew that multiple incompatible systems would arise from a grammar spec without a behavior spec. And here we are. Python doesn't do anything useful with the types, but third-parties are left to their own devices.


> The Python folks knew that multiple incompatible systems would arise from a grammar spec without a behavior spec.

Python typecheckers predate in-language annotations and drove the spec, not vice versa.


Function annotations were added as part of Python 3.0, with PEP 3107, proposed in 2006 [0]. The first public mypy commit was in 2012 [1], and originally it was using C++-style syntax (`list<Token> lex(str s):`). The `typing` module and the official use of function annotation for type hints came in 2014 with PEP 484 [2], inspired by mypy. The first pyright commit was in 2019 (with a lot of code in the third commit [3], possibly moved from the VSCode extension).

[0] https://peps.python.org/pep-3107/

[1] https://github.com/python/mypy/commit/6f0826a9c169c4f05bb893...

[2] https://peps.python.org/pep-0484/

[3] https://github.com/microsoft/pyright/commit/1d91744b1f268fd0...


I agree. I believe the type checker should've been part of the interpreter, or a module like pip. Differences between mypy and pyright drive me crazy.

That said, I don't think this a reason not to use type hints. They're still useful (see TFA), even if they're not perfect.


With Python you can have your cake and eat it too, though.

Mock up something fast, no type hints.

Now, take that POC and make it production ready, by using mypy and pydantic.


> Now, take that POC and make it production ready, by using mypy and pydantic.

Then watch it explode in production because your "type system" is incomplete and unsound.

In my opinion an unsound static type-system is worse than no static type-system at all. In both cases you need to check everything manually. But without such pseudo type-checking you at least don't get lulled into a false sense of security.


But why does it need additional dependencies just to get all language features?


The Python community dislikes putting fast-changing things in the stdlib, because being in the stdlib slows down the development, and ties any improvements to upgrading Python.

(This is also why there is no good HTTP client library in the stdlib, even though the popular `requests` library gets new releases every 6 months with minor fixes only)


Not to start a religious war, but I think Ruby screwed up gradual typing by making it too complex, with too many steps. Attempting to maintain perfect, Microsoft-style legacy compatibility rather than making a hard break is the greater failure; better a truly new major version than arbitrary marketing increments.

Crystal is compiled with static typing but looks like Ruby. The type specification it uses emulates gradual typing of dynamic languages.


How does Ruby’s system look? Python’s type hints are fully optional and opt-in, typed and non-typed code works the same (even though type checkers may complain about the latter).


I believe the Ruby designers have refused to add syntax for type hints, so they need to either be in comments on separate lines from the code itself or even in a separate file. They are therefore less ergonomic to use - but the increased separation from the code means that they are mostly used purely as (machine-verifiable) comments. On the other hand Python's type hints tend to be deeply intertwined with the code, and are even required to access new language features such as dataclasses.

The two languages take such different approaches because their designers have different feelings about static typing. Guido and the Steering Council seem to want Python to be as statically-typed as possible, whereas Matz thinks "static type declaration is redundant" [0].

[0] https://evrone.com/yukihiro-matsumoto-interview


Some time ago I made a dependency injector[0] in Python using type hints. I have always enjoyed playing with type systems and I wanted to explore Python's.

I remember that it felt rough. I had issues especially with functions using variadic types in generics. I also remember having issues with overloading a function: sometimes it would go for the more generic overload instead of the more specific one when inferring types.

I managed to solve all of that. Unfortunately, that happened some time ago and I don't remember the specifics, only that it was a fun project to develop. I use it frequently in other projects.

[0]: https://gitlab.com/applipy/applipy_inject


If you accept my apology in advance: why does this type stuff seem ugly to me (distracting, complex, hard to act on)? I'm not against it; as in TS (non-enforced type checking), it is a lovely addition to Python, but I'm really struggling to read and write this syntax. Not sure if it is Python's nature. For most of us the journey was C, C++, ..., TS, and the root, C, is some kind of cult we stay attached to. Is that what's preventing me from loving this thing? (Thanks for the effort.)

I don't know.


Additions to a language always confront syntax expectations. Some people can see through syntax to semantic intent; alas, I am not one of them. The utility of a syntactic form goes (for me at least) beyond expressiveness to comprehension: if your syntax confuses, how can anyone comprehend?

Haskell dies in syntax.


For a bit of quantitative analysis on this, we had fun surveying programmers; see Table 7 @ https://lmeyerov.github.io/projects/socioplt/papers/oopsla20...

Ex: programmers value types much more for documentation than for preventing bugs. I had not expected that answer!


Yeah, I've found when trying (and mostly failing) to convince Python developers to use type hints that they often think it's only for fixing bugs, and then resist because a) of course their code doesn't have bugs, and b) they've already written their code and discovered lots of bugs painfully at runtime, so they wouldn't get most of the bug-finding benefits anyway.

They never appreciate that type hints make code easier to read and write and navigate and understand and maintain.

Often it's Vim users that have never used a good IDE.


Haha, I was one of those vim users. Lately I've found Sublime Text with pyright is an effective setup for Python development.

Electron-based IDEs seem to require constant access to a power outlet, at least on my laptop.


Honest question: Are you coding in a tent, outside of civilization?

I've never understood this "battery argument".


Not sure it's an honest question. But sometimes I work on the train. Or in the cute coffee place down the street.


It was an honest question. (But got down-voted nevertheless… :-))

I'm not a fan of Electron-based "apps" either, as they're very problematic in all kinds of ways.

But I never could, and still can't, understand the "battery argument".

Battery life is not the biggest issue with Electron (and actually IntelliJ is an even greater power sucker).

The point is: Most people doing software development never "work" anywhere where there's no power outlet!

Most modern trains have power outlets. Likely every coffee place in the western world has power outlets.

If you're not on a safari, or in the forests, or the desert, there will always be a power outlet near you.

So the whole premise of "I need 10h of battery life" is moot. You actually never need that!

Even if you work on the go sometimes, the battery only needs to last as long as it takes to move from power outlet to power outlet.

Everything else boils down to: people these days are too lazy even to plug in a charger…


Haha you are right that I typically am working within 30 feet of an outlet. But not usually seated within 6 feet of an outlet.

Maybe we can invent wireless power to go along with the wireless ethernet?

Personally I would prefer to just use less power. Chips have never been more efficient, and batteries have never been bigger. With the right software, even an older laptop can last all day.


> I enjoy using static types: 18% (+/- 8)

That number seems to be from 2013. I imagine it would be much higher today - 10 years ago dynamic typing was at the peak of its hype cycle, and right now it seems to be in the Trough of Disillusionment.


That's a legit hypothesis!

Speaking as a scientist, I'd love to see a round 2 of this work to see how much things are the same vs different. The premise for the work was doing more serious sociological methods could help understand tough phenomena here & suggest new solutions, so bringing in longitudinal analyses would be fascinating.

FWIW, if I remember right:

- Languages like Java, C++, .NET were the most popular. Maybe iOS/Android apps too?

- Buzz around then were Scala + Java (big data), Haskell, Elm, D, and the beginnings of Rust.

- TypeScript was already a year or two in as well. But population-wise, probably still niche.

- That was probably also some of the heaviest Node buzz.

Nowadays, we also see heavy rises in dynamic languages:

- Professional data scientists using Python & R. I think Python has become the #1 language for new folks?

- Go looks like a statically typed language... until you compare it to modern C++, D, Rust, etc.

So I wouldn't be surprised to see a shift... but then again, it's not at all obvious how much, especially controlling for selection bias.


I greatly hope and somewhat expect AI code generation will lead people to focus more on correctness: tests, type signatures, and (ideally, eventually) dependent types.


The only reason to consider type hints is for a performance increase and there wasn't any mention of that. What can you really expect from using type hints accurately?


> The only reason to consider type hints is for a performance increase

No, the reason to consider type hints is because it makes it easier to understand existing code and write new code that interacts with it correctly. Comprehensibility and correctness are more important than performance.


In a dynamically typed language, that's what comments are for. Since you didn't need types specified in the first place, why would you have trouble interacting with it? Comprehensibility and correctness are just as important as performance.


Type hints are great when using FastAPI. Your inputs are automatically validated, and you get a /docs endpoint that tells people what to expect from your API.

I'd say performance is far from the only reason to consider type hints.
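
A minimal sketch (run it with uvicorn and visit /docs):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        name: str
        price: float

    @app.post("/items/")
    def create_item(item: Item):
        # The JSON body is parsed and validated against the hints; a bad
        # payload gets a 422 with details, and /docs documents it all.
        return item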


Similarly for Typer, which is literally "the FastAPI of CLIs"[1]. Handy to type your `main` parameters and have CLI argument parsing. For more complicated cases, it's a wrapper around Click.

[1] https://typer.tiangolo.com/
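
A minimal sketch:

    import typer

    def main(name: str, count: int = 1):
        """CLI arguments and options come straight from the type hints."""
        for _ in range(count):
            typer.echo(f"Hello {name}")

    if __name__ == "__main__":
        typer.run(main)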


mypyc does that: https://github.com/mypyc/mypyc

> Mypyc compiles Python modules to C extensions. It uses standard Python type hints to generate fast code. Mypyc uses mypy to perform type checking and type inference.

> Mypyc can compile anything from one module to an entire codebase. The mypy project has been using mypyc to compile mypy since 2019, giving it a 4x performance boost over regular Python.

I have not experienced a 4x boost, rather between 1.5x and 2x. I guess it depends on the code.


> Compiler instructions

> I don’t know how many people are doing this, but tools like mypyc will use type hints to compile Python code to something faster, like C extensions.


Using typed dependencies makes the Python IDE experience significantly nicer.



